[Wittrs] Re: Algorithms, Abstractions and Minds

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 01 Aug 2010 04:19:35 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:
<snip> is like arguing Searle's position.
> >

SWM:
> > BP, PP, S/H, non-S/H . . . isn't it all just otiose?
>

> These distinctions are not otiose for some, but otiose for others given 
> conflation of PP with BP.  This sort of thing is what Peter was trying to 
> tell you


You should stop hanging onto Peter, Budd. I imagine you have tried to entice 
him to this list in order to help you out and maybe you will succeed. I'm more 
than willing to talk with him again if he can be civil. If not, I won't bother. 
But, really, this isn't about Peter's claims, but yours. Instead of calling for 
help, make your case.


> in the form of saying that you want things both ways--PP as PP and PP as BP.


This is simply false. It's a misunderstanding of the things I've said on this 
matter.


>  The latter interpretation is consistent with Searle and the former is what 
> Dennett means to distinguish from Searle's biological naturalism in the form 
> of an eliminativism (functional eliminativism which is mired in 
> observer-relative notions of just what a function is to be when gotten 
> computationally) such that the intentional drops out of the explanation at 
> the bottom level.  This is all misguided according to Searle because we in fact 
> want a causal account of intentionality.  And from reading through a bit of 
> Russell's _Human Knowledge:  Its Scope and Limits_, I see Russell as part of 
> the tradition which connects with Searle's and Fodor's concerns.  And I see 
> now why you are afraid Russell is taken seriously--it's the old story about 
> mental content.  Did you ever find out that Dennett is
> actually denying the premise that minds have contents?


That's your claim though you have never done anything to back it up. FYI, PJ 
used to make the same claim with regard to Dennett's denial of qualia, so maybe 
you can start there. (I've already dealt with that numerous times but if you 
really want to try to make your mistaken case re: Dennett, that might be a 
place for you to go.)


>  Did you ever think to understand functionalism as not giving a care about 
> content if content is relativized to functional content?


Whatever that means! As I've already said, Dennett never denies that we have 
experiences. He speaks numerous times of particular experiences: the way coffee 
tastes, the way the sunrise looks to us, the way a musical piece sounds, etc.


>  Did you know that the notion of a function of a function is too abstract to 
> count as bona fide mental content from a Russellian,
> Searlean and Fodorian point of view?


What is a "function of a function" and how do you think this accords with 
anything Dennett has said?


>  Did you know that that is why these guys have critiqued behaviorism 
> (Russell) and functionalism (Searle) and functionalism of mental content 
> (Fodor)?
> >

Again, what's your point? (Did YOU know that you have Dennett wrong? And 
Searle's CRA by the way since you don't fully grasp it.)

> >
> > > And you are forever having it both ways by saying that PP is more 
> > > powerful than serial processing given computational complexity--but 
> > > Searle points out that all PP can be serially computed by a UTM, which 
> > > the CR is.
> > >
> >
> > What "Searle points out", as you put it, is irrelevant to the issue because 
> > this hinges on whether or not we are speaking of a system level feature or 
> > something below it, i.e., a feature of the system's constituent elements.
>
>
> Don't change the subject so much, Stuart.


That is precisely the subject, Budd!


>  All PP can be computed serially.  And Searle is not arguing against system 
> level features OR constituent elements below such a
> level.  This is where your argument breaks down.


The only way for the CRA to lead to a true conclusion that computers can never 
be conscious is for consciousness to be identified with a feature within the CR 
AS IT IS SPECKED. Once we recognize that consciousness could be a system level 
feature (the outcome of multiple CR processes combined and operating in a 
certain way) rather than an element or feature of an element within the CR, the 
CRA's conclusion is no longer implied. Thus the CRA as an argument collapses. 
I'm sorry you can't grasp it.


>  You misdescribe the whole raison d'etre of the CRA.


I don't describe its raison d'etre at all. I accept Searle's argument as he 
makes it, that it is intended to support a conclusion that computers cannot be 
conscious in virtue of their programming. (Obviously, his argument is not 
directed to the idea that a soul might be infused into an inert machine, such 
as a computer, via transmigration, etc.)


> And if you want to reply that it is a strawman, then be consistent and 
> disallow ontological conclusions that Searle doesn't draw given the strawman.
>
>

Searle's conclusions are self-evident. And their implications are clear, even 
if Searle, himself, misses them.

>
> > Of course, Searle doesn't recognize this either, as far as we have seen in 
> > his arguments, so you are at least in his company on that.
>
>
> It would be good company and your argument is indeed missed by Searle for the 
> reason that it is very bad and ought not be considered.  I already considered 
> your argument and showed that it is premised on a misreading.
>
>


You didn't, and continue not to, grasp it. That is very evident in your 
numerous posts.


> > Too bad it is the wrong side of the debate. But I suppose this notion of 
> > system-level vs. constituent-level is one you are never going to understand 
> > (since you haven't thus far).
>
> Well, you did agree that Searle gets the piston/butter story correct.


So what? We're talking about his CRA (and his later argument of incoherence).


>  And when you insist that his CRA gets the wrong picture, you interpret 
> Searle as denying constituents that are physical when in fact Searle was 
> denying the coherence (later) and the value (earlier)
> of functionalist explanations of semantics.


Searle's two arguments yet again -

CRA:

1) Computers run programs and programs are merely syntax.

2) Minds have mental contents (semantics).

3) Syntax does not constitute and is not sufficient for semantics.

Therefore computers can't cause minds (be conscious).


Later argument from incoherence (introduced after the weaknesses of the CRA 
became more widely known):

1) Computer programming is neither syntax nor semantics in itself; it is just 
such and such physical events, which require an outside observer to count as 
syntactical or as anything else.

2) Ascribing syntax to computer programs adds nothing to them except an 
abstract idea that is not, itself, part of the physics of the events.

3) What is abstract cannot cause anything in the world (lacks causal efficacy).

4) It takes brains to cause consciousness which IS self-evidently in the world.

Therefore computers cannot cause consciousness (because it is incoherent to 
imagine that what is abstract can cause anything).


>  Go be with Dennett and deny semantic contents.
>

As I said, you just don't understand. In fact you are so far from understanding 
that you seem to lack even an inkling that you might not understand.

> >
> > Anyway, IF SEARLE WERE IN AGREEMENT WITH THE POSITION PUT FORTH BY DENNETT 
> > THEN WHY DOES SEARLE CONTINUE TO DENY DENNETT'S POSITION? You'd think he'd 
> > have figured out by now that Dennett's position doesn't contradict his even 
> > as he continues to contradict Dennett's!
>
> I explained this above.  And in other posts.  It is you that is the problem.
> >

My view and Dennett's are the same on this matter. So if I am "the problem" 
(really in agreement with Searle without realizing it!) then Dennett is, too. 
But Searle doesn't think Dennett is really agreeing with him and, therefore, he 
cannot think I really am either.

That you do is testimony to how far off the Searlean reservation you have 
strayed.


> > So what makes you think you know better than Searle what his position is?
>
> Bad inference.  You're playing a game from politics--define the other side 
> before they define you.  And if in a pinch, tweak it such that your position 
> is now consistent with Searle's.  And if pinch comes to maul, go ahead and 
> misdescribe what Searle was actually saying.  And then draw inferences from 
> your duck soup which can go blow on.
> >
> >

Verbiage, Budd, but useless. Look closely at what I said above vis a vis my 
view, Dennett's and Searle's.

> > > When you distinguish between the CR as underspecked, you are maintaining 
> > > that it is not complex enough in BP terms, which is Searle's position, 
> > > while maintaining that such is a computer.
> > >
> >
> > No, Budd. Since Searle denies Dennett's thesis that a sufficiently complex 
> > computational system (made up of a massively parallel system running the 
> > right programs) could do it, Searle is saying quite clearly that it isn't a 
> > question of complexity (robustness) but of the nature of the processes 
> > themselves (i.e., they are computational) that is the problem.
>
> He's denying that functionalist sorts of computational explanation are good 
> ones.


Right, by denying that a computer can be conscious BECAUSE a certain computer 
system he has specked (the CR) isn't!


>  Now, if you already believe that there are no bona fide mental contents to 
> be had except ones that can be redescribed in functionalist terms, then you 
> might think to have your cake while eating it.
>

This is pointless. Talk about the substance instead of hiding behind metaphors.

> >
> > This is why I have said you don't really understand Searle's position. You 
> > are completely missing his point which is that, if a CR cannot understand 
> > (as we understand), then no other R (no matter how robustly configured) 
> > could do so! Of course, that is precisely the Dennettian claim, i.e., that 
> > that's what it takes (more robust configuration).
>

> And I argue that you don't understand what Searle is saying.  I have a prima 
> facie case when I point out that you misdescribe his
> position to draw your conclusions about his position.


Except you are wrong when you impute misdescription to me. So your basic 
premise is flawed.


>  If the configuration is explained in functional terms, Searle
> suggests that the explanation is too abstract.

It's about what computers CAN do, Budd, not about how we EXPLAIN what they do.


>  Surely Searle is not denying that R's like brains can do it.


That's where he falls into contradiction. However, he does make the point that 
the brain is NOT a computer in any meaningful way (as you have often reminded 
us) while his point about the CR is that it works LIKE a computer! The 
conclusion he wants us to draw is that since the CR cannot be conscious, no 
computer (no R) can be. But he has already excluded brains from the class of 
these R's by insisting that brains aren't computers in any meaningful way. (I 
have, however, noted that in this he puts the cart before the horse because he 
really doesn't know that brains aren't computers or computer like -- after all 
it is Dennett's thesis that they are -- so he can have no grounds for simply 
assuming from the outset that they aren't. However, he does take from the fact 
that the CR fails to be conscious, that therefore brains must be different from 
what CRs are. But that is a mistake since, as Dennett's thesis points out, the 
CR's flaw is not in its nature [what it is] but in the scope of the system it 
implements.)


>  And neither is he denying that some R's which are not human brains may do it.


He has certainly said that it is not out of the question that we may one day 
figure out what brains do in this regard and may be able to synthesize that on 
a machine. But, he insists, a computer will never, ever qualify. That is simply 
unsupported by either of his arguments.

I note that you seem to be the last one still trying to keep these two 
arguments afloat (except maybe for Searle himself). You must feel very deeply 
committed to them.


> Once your explanation is something other than BP, it is mired with having BP 
> (somehow) in cahoots with formal programming.

Oy.


>  But the cahoots don't connect, and so you conflate PP with BP


Argument by mantra again.


> but still want to distinguish your PP from what Searle would call BP (as in 
> brains or some other system explained in something other than purely 
> functionalist terms).
>

BP, PP, S/H, Non-S/H . . .

> >
> > Note that if the argument Searle derives from the CR (the CRA) does not 
> > apply to anything but a system specked at the CR level then it is a 
> > pointless claim because NO ONE THINKS THAT PROGRAMMING A MACHINE TO RESPOND 
> > BY ROTE MECHANISMS IS TO PROGRAM UNDERSTANDING IN THAT MACHINE! We can all 
> > agree on that. But, of course, the AI project is about much more complex 
> > systems, doing many more things than rote responding, than that! So if you 
> > are right, the CRA is a pointlessly trivial argument with no implications 
> > beyond the CR. If you have only built a bicycle, you cannot expect it to 
> > soar above the clouds!
>
> This is where you are talking about PP as BP.
>

I really need to start doing some snipping here as this is way too long but I 
am leery of it since I don't want to remove your claims, such as they are. It 
strikes me as not being sporting.

> >
> > Really, how hard is it to grasp this? But if you can't, let me again call 
> > to your attention the still more obvious fact that Searle denies Dennett's 
> > thesis and you think Searle's a pretty smart guy so why hasn't he figured 
> > out yet that there is nothing in Dennett's thesis for him to deny as your 
> > interpretation of the CRA clearly implies?
>

> No, it is your understanding that is in question.  Dennett really does 
> distinguish his brand of PP from BP because the system he's tinkering with 
> has a functional level of explanation all the way down, as Fodor pointed out 
> many moons ago.
> >

Quote Fodor here then so we can see what he has "pointed out" and consider it 
critically. Name dropping is not philosophy.

> >
> > > Searle's point is about how, and Neil put the point perfectly, 
> > > computational explanations in terms of programs (a functional type of 
> > > explanation) is not good enough.
> > >
> >
> > It certainly might be if understanding (and the other features of 
> > consciousness) are system level features.
>
> Bah humbug, because the system level features are described
> functionally or not.


It's not about descriptions but about the capabilities of real physical things.


>  If the former, then you might be conflating BP system features with 
> functionalist explanation if you don't understand that functional explanation 
> is a bit too abstract for Searle and Fodor; and if the latter you are 
> switching the subject to BP, which Searle is not arguing against.  Again.
>

All right, this is getting nuts. If all you can do is say the same things over 
and over again, I will snip away what follows and "go home". (I'll read a 
little further before deciding.)

>
>
> >In that case, the problem lies NOT in the constituent processes but in the 
> >system that has been specked into the CR. Add more processes doing more 
> >things in the right way (interactively, etc.) and you get a more robust 
> >system.
>
> BP.
>

BP just got a new CEO.


> > That a slimmed down, barebones system can't match what a brain can do says 
> > nothing about what a more complex system could do.
>
> All PP can be done serially.
>

Massively parallel processing adds things including:

1) Capacity
2) Simultaneity
3) Interactivity
4) Unpredictability (randomness)

And it's all done with serial processing. Oops!
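
(A quick illustration in Python -- my own toy sketch, nothing from Dennett or 
Searle, with invented stream names and step counts -- of how one strictly 
serial loop can interleave many "parallel" streams of work:)

import random

def make_stream(name, steps):
    # A toy process: a generator yielding one unit of work at a time.
    for i in range(steps):
        yield "%s: step %d" % (name, i)

streams = [make_stream("P%d" % n, steps=3) for n in range(4)]  # capacity

while streams:                          # one serial scheduler does it all
    s = random.choice(streams)          # unpredictability (random scheduling)
    try:
        print(next(s))                  # one serial step; the streams advance
    except StopIteration:               # together, hence the "simultaneity"
        streams.remove(s)

Interactivity isn't shown, but it would just be the streams reading and 
writing shared state.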


> > Of course, for that you need capacity equivalent to brains.
>
> BP.
>


Who says otherwise? It's all about the physical operations of computers!


> > Dennett's thesis is that this means you need a massively parallel platform 
> > because that's what he claims brains are when you get down to it.
>

> Are they by definition doing parallel information processing ONLY (Dennett); 
> or is it rather the case that there is just BP without necessarily describing 
> the brain as doing ONLY parallel information
> processing?


This isn't about different descriptions. It's about whether both brains and 
computers answer to the same approximate description.


> Certainly there is a description of a brain such that there is absolutely no 
> information processing going on except at the system level
> where intentionality is at.


This confuses ideas of "information processing" and, perhaps, "intentionality".


>  But perhaps we redescribe some of the BP in functionalist terms such that 
> sensory modalities differ via different sorts of information processing.  
> What information processing actually causes consciousness?  What?  
> Information processing causing consciousness?  I thought some brutish system 
> is necessary for THAT.
>

As I noticed above, you are confused about "information processing".

> >
> > Dennett may or may not be right but Searle's CRA has no implications for 
> > his claim and especially not if we take your interpretation which I think 
> > even Searle would balk at!
>
> I really see you as not knowing what the hell you're talking about, still.  
> So sad.
> >

Okay, this has gone on almost long enough!

> >
> > > And it is not because Searle is wedded to extra stuff whereas Dennett 
> > > isn't.  What Dennett is doing when making that claim is just dodging a 
> > > type of better psychology than can be had in functional terms.
> > >

> > Can you argue for that or do you just want to get by with another 
> > unsupported assertion?
>

> Functionalism is too abstract.


What is "too abstract" about it?


>  Ontological subjectivity is less abstract because real.


The question is where does subjectivity come from, how does it arise? That 
there is subjectivity is not the question. No one challenges it. The question 
is how is it possible in physical entities like brains?

But where is your argument that "Dennett is . . . just dodging a type of better 
psychology than can be had in functional terms"?


>  Functionalism nets you behavior of sensory modality but in terms of 
> information processing.


What is "information processing" do you think?


> We are not, so says Searle, to think that information processing is 
> equivalent to what a machine has to do in order to pass the causal
> reality constraint.


Why not? Because he says so or because you say he says so? How is that a reason 
to do or not do anything?


>  But I don't suppose to have proved it just now.  Suffice to say then that 
> when push comes to shove, you argue BP anyway, which is Searle's (but not 
> Dennett's!) position.
> >

Not only haven't you "proved it", you still haven't attempted to argue for your 
claim that "Dennett is . . . just dodging a type of better psychology than can 
be had in functional terms" which is what I challenged you to do above!

Anyway, my view is consistent with Dennett's and Dennett's is consistent with 
mine so you can't escape the fundamental flaw in your claim by pretending that 
we are making different points and that you are only arguing against my 
position, not Dennett's. Or rather, you can certainly pretend and even convince 
yourself you aren't pretending, but I'm guessing that the pretense will be 
pretty obvious to many here (if they are bothering to read along). In fact, on 
this issue, there is no daylight between my view's and Dennett's.

> >
> > > That's why I think Bertrand Russell was right to insist on the absurdity 
> > > of a view such as Dennett's when it comes to psychology.
> > >
> > >
> > I wasn't aware Russell had ever considered Dennett's thesis. Have you some 
> > evidence of THAT claim? After all, they are hardly contemporaries in the 
> > field even if Russell lived a very long life.
>
> Well, it's not as if Dennett's view is all that new.  So, ...  But yes, an 
> anachronism nevertheless.
> >

It is, and Dennett's view is new enough.

> > >
> > > >

<snip>

>
>
> >  But then I have already pointed out that Searle is in self-contradiction 
> > vis a vis his treatment of brains and computers and that that is a big part 
> > of his confusion!
>
>
> At this point, I will remind everybody that Searle is critiquing functional 
> explanations.  Should I use exclamation points now?!
>
>

You should try to get the argument right.

>
> > So we can find him affirming things in one place while denying them (or 
> > arguing in a way that is only consistent with their denial) in others! 
> > That's what it means to be in self-contradiction!
>
> This is all because you don't share his distinctions.


I suspect you don't either. But even if we both don't, there is no reason 
anyone should. The issue, finally, is about whether some computer, somewhere, 
can be built to be conscious, not what distinctions we rely on in describing 
what that computer is or is doing.


>  And if he's arguing against a strawman, you can't make up crap just because! 
>  You try.  But you are just not understanding the whole point of the CRA.
> >
> >

> > > when denying the coherence of functional explanation of cognitive states.
> > >
> > >
> >

> > His incoherence argument (with which he tried to replace the CRA while 
> > never explicitly giving the CRA up!) is worse than the CRA since it 
> > completely misses the point about computers and computationalism.
>
>
>
> What point is he missing?  That PP can be conflated with BP?  Deja vu!

Do you really think there's any greater chance of your getting it if I repeat 
for the hundredth time what point he is missing? If you didn't manage to 
understand the first 99 times, there's little reason to believe the 100th will 
do the trick!

<snip>

> > > Searle's view is about system level features.
> >
> > Searle is confused about that because he appears to take that view (albeit 
> > without fully explicating it) vis a vis brains but the CRA depends on a 
> > failure to grasp that view. Once you grasp it, the power of the CRA to 
> > compel the conclusions he claims for it collapses. (Since I have explained 
> > this so many, many times, I will not do so again. Just go back and read my 
> > old posts on this, which are legion.)
>
>
> And they all suck noodle soup because you misdescribe the CRA as a way of 
> arguing against physicalism when in fact it is about the failure of 
> functionalist explanation to provide necessary and sufficient conditions for 
> semantics.  This is the beginning and end
> of the story.  Repetitions to follow, though.


Oh don't bother to repeat because if that's all you do, I'll pass, thanks.


<snip>

> >
> > >  I think you are really bad at understanding Searle's point or are just 
> > > making up things for fun.
> > >
> > >
> >
> > Well I guess that's all you have left to say in support of an obviously 
> > insupportable claim that you cannot divest yourself of.
>
> But I've been showing exactly how you go about your bad argument.  It is bad 
> because it misrepresents Searle's CRA and overall position. After having 
> shown this, then either you understand Searle's position better or you stick 
> with your difficult monster of interpretation you gave birth to six years ago.
>

You don't grasp the CRA at bottom nor what I have said concerning it.

>
> Does your son, by the way, buy what you're selling in terms of Searle 
> scholarship?  Hopefully he cares more about being practical!
>
>

My son has given up philosophy but he was raised religious (as his mom is) and 
wants desperately to believe that mind is more than just a physically sourced 
phenomenon. I don't press him because I don't want to undermine his belief 
system (though I suspect at some level he is already moving away from it -- but 
such movements often take a long time, sometimes a lifetime).

> > > >
> > > > But, of course, you will never see this and I have quite given up on 
> > > > expecting you to since you don't even fully grasp Searle whom you have 
> > > > set yourself to defend!
> > >
> > > I think I've explained exactly what Searle is denying with the CR.
> >
> >
> > You have totally missed the point of his claims as evidenced most clearly 
> > by your remarkably ridiculous notion that Dennett's thesis doesn't 
> > contradict Searle's even while both Searle and Dennett think it does.
>

> You are an awesome windbag!
>

You are the one who goes on and on interminably and constantly repeats the same 
mantras while failing to fathom the points being made. Well one of us, at 
least, is just blowing smoke.

>
>
> > This either shows you are smarter than the both of them or that you don't 
> > understand the real issues in this debate. Frankly, I think the 
> > preponderance of the evidence favors the latter conclusion.
>
> Thanks for that.  It betters the looks of my position since my position is 
> that you're bad at logic and reading in general.  But you could be 
> play-acting for all that.  If not, I believe you are seriously 
> benighted--though other smarty-pantses have been quick to play the 
> "everyone's a zombie" card.
> >

Oy.

> >
> > >  It is the denial of the functionalist sort of explanation to arrive at 
> > > necessary and sufficient conditions of
> > > semantics/consciousness.  End of story.
> >
> >
> > You can only end a story you get.
>

> Shall I interpret your pronouncements seriously?  Then the above shows that you 
> wouldn't comment on the specific point, which I hold.  End of (this) story.
> >


I'm sure you do.


> >
> > >  What PP proponents are doing is just conflating PP with BP.  But if you 
> > > want your functionalism, you have to distinguish BP from PP without 
> > > conflating the two types of explanation.
> > >
> >
> > The only conflator here is you, Budd.
>
> Bullshit, of course, though you might be too benighted to understand that you 
> are no fertilizer in this debate.
>


BP, PP, S/H, non-S/H, otiose, "not machine enough" -- oy!


>
> > In your preferred terms, there is no PP in this debate except insofar as it 
> > is an application of BP in which case it is only the BP that is at issue, 
> > not some rarified non-thing called PP.
>
>
> Nice try.  I've been pointing out that it is YOU that is trying to have things 
> both ways and it is Dennett who distinguishes PP from BP simpliciter.


Try actually reading Dennett. You can start with that text I transcribed onto 
this list a while back.


> Searle is saying that BP simpliciter without interpretation of BP as PP 
> INFORMATION PROCESSING is a good nonfunctional way to look at
> possible systems that pass a causal reality constraint.


Ah yes, add "causal reality constraint" to your list of mantras above! Don't 
you even see that your argument is just attempted fancy verbiage signifying 
nothing that is particularly coherent?


>  Once the system is defined functionally as PP (Dennett) without conflating 
> the PP with BP, then you have a system that is subject to the CRA because it 
> is about functional explanation which has an abstract character even if one 
> wants to say that there is BP
> happening.

Dennett is talking about computers which he describes as the combination of 
hardware and software. The brain, he suggests, is like a massively parallel 
computational processor and the features of consciousness are the various 
"virtual serial machines" running on the real (read physical) parallel 
processing machine. Dennett isn't speaking of abstractions though Searle (and 
you) want to reinterpret his words in such terms!
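
To make "virtual serial machine" concrete, here is a minimal sketch (my own 
toy example, not Dennett's; the instruction set is invented): a strictly 
serial fetch-execute loop that is itself just a pattern implemented on 
whatever machinery runs it.

# A toy virtual serial machine: one instruction at a time, strictly serial.
# The seriality is in the pattern, not in the substrate that hosts it.
program = [("push", 2), ("push", 3), ("add", None), ("show", None)]

stack, pc = [], 0                       # pc: the program counter
while pc < len(program):
    op, arg = program[pc]
    if op == "push":
        stack.append(arg)
    elif op == "add":
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
    elif op == "show":
        print(stack.pop())              # prints 5
    pc += 1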

>  So why not BP without PP?  Well, someone defined willy-nilly that all the 
> brain is doing is parallel information processing.  Well, we don't think 
> that's all.


Who's "we" kimosabe? You and buddy Searle? Okay, so you "don't think that's 
all". Big deal. That has no logical implications in this debate at all. It's 
just a statement of what you think. Make an argument for god's sake already!


> It is doing BP without our having to redescribe all it does in terms of 
> information processing.
>
>

I guess clarity is still not your strong suit.


>
> >
> > > >
> > > > > Then listen again to Peter's point about PP proponents who 
> > > > > distinguish PP from serial processing in a way that amounts to BP, 
> > > > > which Searle is not arguing against.
> > > > >
> > > > PP is BP (using your ridiculous lexicon).
> > >
> > > You are seeming more and more like an idiot;
> >
> > Oy.

> Well, why is it ridiculous then?  Perhaps because you are used to conflating 
> them?  Don't be such an idiot--distinguish them if you are to understand that 
> there is that abstract information processing description of the brain which 
> is ineliminable from PP.  For Searle, PP is eliminable because BP need not be 
> defined as information processing.
> >

This has nothing to do with abstractions. It's about what real machines can or 
cannot do in the real world.


> >
> > > but our disagreement is about whether Searle is making a good point 
> > > about functionalist sorts of explanation.
> >
> >
> > It's not about picking our favorite explanations. It's about what can 
> > actually be done with certain kinds of machines.
>
> And it helps if you know something about the machines.  How are they defined, 
> say?  Are some machines of the nonsoftware/hardware type while others are S/H 
> simpliciter, like any serial or parallel computing?!  You know, ridiculous 
> things like that.
> >

Oy, here we go again! Add "simpliciter" to your lexicon of mantra terms.

Are some machines computers and some not? Well, yes. So?

> >
> > >  For Searle, PP is not BP because it carries a functional type of
> > > explanation since it is still about computation.
> >

> > I agree Searle does share this particular confusion with you.
>
> I don't think it is a confusion, Stuart.
>

Right, you're too confused about it.

> > But just because he does is no argument that he is actually right! A 
> > confusion is a confusion, no matter who is confused.
>
> This is another suck-ass comment.  Let's leave Aesop out of it.
> His distinction is not confused.


His argument is confused because various of his terms and premises are, as I 
have shown hundreds of times by now.


>  And your unwillingness to make the distinction is your reason for calling 
> him confused.  And your unwillingness is not shared by Dennett, who does 
> distinguish his brand of PP as functionalist--inspired by Witters' logical 
> behaviorism.
> >


Wittgenstein was hardly a behaviorist but that is not the issue. What is at 
issue is whether there is daylight between my position and Dennett's. Can you 
cite anything specific that demonstrates a difference in our views on this?

(I'm going to have to start snipping here more aggressively. If I take anything 
out you think isn't "otiose", feel free to bring it back in if you must. But 
this post is by now way too long!)

<snip>

>
> > . . . this harping on so-called "PP" is much ado about nothing since no one 
> > is arguing for some abstraction as a source or cause or producer of 
> > instances of the features we recognize by the term "consciousness".
>

> But you miss that they are fine with functionalist explanations.  Searle is 
> not.


THIS ISN'T ABOUT WHO IS "FINE" WITH WHAT EXPLANATIONS. IT'S ABOUT WHAT WE CAN 
GET CERTAIN KINDS OF MACHINES TO DO!


> And he shows why not in the CRA.


The CRA is a disaster as an argument. If it weren't, he would never have had to 
come up with his replacement argument from incoherence!


>  If you want to conflate PP and BP, you still got Searle's position, but just 
> with an inconsistent description of it... so that you can conclude he's 
> confused but really only you are or are playing a stoopid game with me.
> >


You just don't follow.


<snip>


> > > or own up to the upshot of functionalist explanations which are 
> > > eliminativist--which is ridiculous as Russell points out in _Human 
> > > Knowledge:  Its Scope and Limits_.
> > >
> > >
> >
> > Give the argument, don't just name-drop! Russell isn't here. You are. Or at 
> > least you seem to be.
>

> 1.  Functionalist explanations are explanations from a point of view about 
> which functions are being defined as happening either by programming or 
> observation.
>
> 2.  Functionalism is mired with observer-relativity until we get at the 
> bottom level where intentionality drops out (this is recursive decomposition).
>
> 3.  The bottom level, for a functionalist like Dennett, would be in terms of 
> zeros and ones.
>
> 4.  The zeros and ones are likened to the firing/nonfiring of neurons.
>
> 5.  Any system of zeros and ones can be given all sorts of descriptions 
> having nothing to do with the system in BP terms (actual neurons along with 
> chemical reactions).
>
> Ergo, 6.  (quite informally though)  All functional explanations allow for 
> software (programming whether in series or parallel) to come apart from 
> hardware no matter how many times Stuart wants to conflate PP and BP while 
> thus sharing Searle's basic position.
> >

Since this is an argument about what real physical entities can or cannot do, 
the fact that different levels of operation can be described in different ways 
is irrelevant to what those levels of operation up and down the line can do at 
our level of observation.

(So you are saying that Russell made the above argument? If so, if that is 
really his argument and not your misstated version of it, then my respect for 
him drops several notches here. But then he did cease adding to the original 
philosophical opus back in the early twentieth century. What came after tends 
to be fluff or popularizations.)


> > >
> > >
> > > > There is no separate PP which, finally, is just a particular 
> > > > configuration of what you call BP. Thus the CR is one configuration of 
> > > > this BP and the more complex system envisioned by Dennett is another. 
> > > > This is finally about configurations not the quality of the parts. Get 
> > > > it? (Probably not but what the hell!)
> > >
> > >
> > > _You_ still don't get it.
> >
> >
> > No, you . . .
>
> At least wait for the reasons!


You gotta be kidding. You have reasons?


> >
> > > Searle's critique is not about the quality of the parts.
> >
> > That is precisely what it is about and merely denying it isn't enough.
>
> It is also not enough to insist on misdescribing what Searle says.
>

>
> > Look at the CRA itself. (But then that never helped before, did it?)
>
> In "Minds, Brains, and Programs" (the target article where the CR is 
> envisaged), one learns what it might be like to have a functional explanation 
> of semantics come apart from bona fide semantics.  It is of no use crying 
> about a distinction Searle is making, when denying
> the distinction nets you Searle's position.


THAT is another of your empty mantras as I've shown dozens of times here. You 
really have no grasp of why that is a ridiculous claim, do you? You read it 
once, when PJ asserted it, and fell in love with it because it sounds so good. 
And yet it is so utterly bogus and wrongheaded -- but you cannot or will not 
see that.


>  But then the system repliers simply contradict their earlier claim when 
> saying that understanding is somewhere else in the system
> because the system was envisaged as BP anyway.


Do you really think just expatiating is enough?

HOW DO THEY "CONTRADICT THEIR EARLIER CLAIM"? Which earlier claim? Which later 
claim? What is the contradiction? Say what you mean, don't just cryptically 
allude to it.


>  The interesting thesis was about functional explanation and how one can do 
> cognitive science without looking at brains, even though the proposal is in 
> computational terms that might resemble the brain as doing INFORMATION 
> PROCESSING.


No, the question is whether brains work like computers. One way to get at that 
is to see if a computer can be brought to do what a brain can.


> Searle is simply saying that the information processing addition to BP 
> doesn't amount to any more BP.  So just BP and shut the functionalist up.
> >

And Dennett is simply saying that all the features we associate with being 
conscious can be produced on a computational platform with computational 
technology. And computers are manifestly physical entities.

> >
> > >  It is about functionalist type explanations not really netting us any 
> > > hope of understanding necessary and sufficient conditions for bona fide 
> > > consciousness and semantics.
> > >
> >
> > It is NOT about different kinds of explanations but different possibilities 
> > we can achieve with particular physical things.
>
> Okay, BP again.  That's cool actually.
> >

BPPPBPPPBPPPBPPPBPPPBPPPBPPPBS

> > >
> > >
> > > >
> > > > > At last, you'll understand that your critique of Searle was a 
> > > > > long-winded tirade amounting to his position
> > > >

> > > >
> > > > If he was really arguing against your notion of a certain kind of 
> > > > explanation (PP rather than BP) then his entire thesis is a strawman 
> > > > and the CR and its conclusions utterly irrelevant to the question of 
> > > > whether computers can be engineered and implemented to be conscious. 
> > > > That it is, finally, BS (since you are so enamoured of the magic of 
> > > > acronyms and initials).
> > >
> > > This just shows exactly how ignorant you are of the literature.
> >
> >
> > Or how thick you are with regard to the issue!
>

> Well here's a perfect occasion to expose YOUR thickness.  If a strawman, you 
> can't conclude anything about what Searle has in mind by way of ontological 
> commitment.  But you actually go for it and fall on your face.  And there's 
> the mud, and you're just a sorry mess of an analyst.  Go see a better one.
> >

Oy.

> >
> > >  The systems reply is just contradicting an original claim made in the 
> > > literature.  I'm happy to hear that some haven't actually held the thesis 
> > > of strong AI as defined by Searle.
> > > >
> >
> > There have certainly been many ideas and theses in the AI field but I have 
> > never encountered anything in "the literature" or in the claims of AI 
> > researchers elsewhere, that supports a view that computationalism is an 
> > argument for the causal efficacy of an abstraction. That is simply Searle's 
> > misunderstanding. And yours, apparently.
>

> Functionalist explanation of physical processes is indeed a more abstract 
> form of explanation than explanation in terms of BP
> simpliciter.

That's a fantasy in your mind and maybe in the mind of some, like Searle, who 
desperately cling to the idea that minds aren't reducible to the merely 
physical.


>  We read functions into physics.

And that's irrelevant to what brains do -- or what computers do.

<snip>


> > > Searle is not arguing against machines being conscious, whether 
> > > artificial or organic.
> >
> >
> > Budd, try to read what I write in context, okay? My reference to machines 
> > comes down to a certain kind of machine.
>
> Oh.  PP?
>

Computers or their equivalents.


> > Obviously I do not argue that any machine can be conscious. I argue that 
> > there is nothing in principle that precludes a machine from being 
> > conscious. As to what kind of machine might qualify, note, again(!), that I 
> > am referencing computational machines, i.e., computers. So my reference 
> > above to "machines" is a reference to generic machines. The argument I am 
> > making, however, is about a particular kind of machine, one that can do 
> > what brains can do.
>


> So, defining PP in terms of what brains are doing.  Watch out!
> >

That's the point. It's an empirical hypothesis that says brains work this way 
so if we can build a machine that works this way it should be able to do what 
brains do. This isn't about definitions, it's about real empirical research.

> > As we have seen and discussed ad infinitum here, the Dennettian thesis is 
> > that brains operate like computers, that, in fact, they are a kind of 
> > organic computer. If this is a correct interpretation of what a brain is, 
> > then there is no reason, in principle, that an equivalent computer cannot 
> > do what a brain can do. Searle's CRA, which is based on the failure of a 
> > computational system specked in a very limited way, purports to show that no 
> > computational system can succeed.
>
> All PP can be done serially.

Irrelevant since the thesis involves massively parallel processing. If the 
brain has X capacity and a serial processor lacks it, then you need whatever 
configuration of processors is necessary to get X capacity. The Dennettian 
thesis proposes that a massively parallel processing platform (many serial 
processors working together running many different operations programmed into 
them) could do what brains can do. The thing to do is design and test a real 
world version of the theoretical model, not try to argue against it a priori! 
Nor does the fact that a limited system, such as the serial CR, cannot achieve 
consciousness say anything at all about what a more robust system could achieve.
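
For concreteness, a minimal sketch of that sort of platform (my own 
illustration; the worker function and the data are invented for the example) 
-- several ordinary serial workers running at once and pooling their results:

from multiprocessing import Pool

def serial_worker(chunk):
    # Each worker is a plain serial processor doing one operation.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Eight serial processors combined into one parallel platform.
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with Pool(processes=8) as pool:
        partials = pool.map(serial_worker, chunks)  # run concurrently
    print(sum(partials))                # a result no single worker computed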



>  So, BP.  And maybe PP = BP.  Both Searle and Dennett think PP is not BP even 
> though for Dennett we can give a PP explanation to some BP.  For Searle, 
> though we can give it, it won't amount to good
> science


Oh right, another of your mantras "not good science" (as if asserting that 
something is or isn't is any kind of an argument!).


> because functionalism is mired by looking at everything as if it has to be 
> some function or other.  Functions are abstract notions we try to see/get 
> implemented into physical machines like computers or
> PP machines which willy-nilly are not necessarily UTM only.


This attachment of yours (and Searle's) to claims of abstraction is where you 
go off the rails.


<snip>


> > But Dennett argues that this is misleading because it conceives of what 
> > brains do as being separate and apart from the constituents in the CR.
>
> The effing constituents in the CR again.  Searle is arguing about 
> functionalist sorts of explanation.  Just because he's not into PP doesn't 
> make him into anything other than BP.  Your argument just conflates PP and BP 
> and you emphasize his denial of PP!  Na ganna du at.
>

Try looking at the CRA without prejudice if you can. If not, then you will 
never understand this.

>
>
> > If the features brains produce are not to be found in the CR, then, the 
> > argument goes, they cannot occur in ANY configuration of those same 
> > constituents.
>
>
> Effing constituents again.  Look, it's about ANY functionalist explanation 
> being good enough.


It's about what consciousness (the features of) finally are.


> Of course, if we are zombies after all, it would seem that functionalism is 
> as good as it gets.

Right, add "as good as it gets". I forgot that one above!

The problem is you don't grasp the significance of what it means to claim we 
are all zombies.


>  But it is not and, as it happens, functionalist explanations can only be had 
> by those who have semantic content.


So (freakin') what????


> If one's eliminativism rules out his understanding of functional explanation, 
> then...


Oy!


>  But if functionalist explanation presumes observer-relativity (It is we that 
> search for functions when in fact only causes are necessary.)

You're all tangled up on different levels of explanation!


> But some just conflate causal explanations with functional ones.  They do 
> have overlap but they are not the same sort of explanation.
>

Oy.


> > BUT IF THE FEATURES BRAINS PRODUCE ARE SYSTEM-LEVEL, RATHER THAN STAND 
> > ALONE IRREDUCIBLES, THEN THE ONLY PROBLEM THE CR EXPOSES IS THAT THE CR IS 
> > AN INADEQUATE SYSTEM. OF COURSE THIS SAYS NOTHING ABOUT THE POTENTIAL 
> > ADEQUACY OF MORE ROBUST SYSTEMS.
>
> BP.
>

Oy.

> >
> > So the point is to test out a thesis like Dennett's empirically, rather 
> > than rely on the logical denial found in Searle's CRA which hinges on the 
> > suppressed premise that the features of mind are not reducible to some 
> > underlying complex of features that aren't, themselves, features of mind.
>
> Actually, it turns out that functionalist explanations are prone to the above 
> objection.  Hence Searle calls them abstract for that very reason.  You 
> should read and find out.
> >

The only reason they might be "prone" as you say is if the objector is confused 
about what role the "abstract" has in all this!

> >
<snip>

> . . . Try to read the argument clearly (mine and his, actually).
>
> Some PP that causes consciousness is equivalent to some BP.  I get it.  It is 
> a nice Searlean position to have.  Robust and all.
> >

You will never get this. I am wasting my time.

> >
> > > Evidence for this is just how bad you go about handling compound 
> > > sentences with an awareness that the issue is fundamentally about
> > > different types of explanation.
> >
> >
> > Its about whether certain types of machines can do certain kinds of things, 
> > NOT ABOUT HOW WE CHOOSE TO EXPLAIN WHAT THEY DO!
>
> Yada yada.  I've talked enough about both.
> >

You sure have though you haven't said much. I tell you what, Budd. I am going 
to leave you to your near religious devotion to Searle. There are some people 
who cannot be moved, some people who cannot change or grow their ideas. I 
believe you are one of these.


<snip>


> > Well there you go then! If (Data's) computational brain can do roughly what 
> > our organic brains can do, you will agree he is conscious. So what's your 
> > problem? On the other hand, Searle's CRA denies that possibility. So are 
> > you secretly in Dennett's camp after all?
>
> You just haven't got it then about what Searle is critiquing.  We are largely 
> in agreement.
> >

Not if you believe the CRA shows that computers can't be conscious or that the 
later argument shows it! Or that Dennett's thesis can't work for those logical 
reasons!

<snip>

>
> Well, functional role semantics is all about how there is holism--so much in 
> fact that it is a wonder whether we ever understand each other.
>
> Cheers,
> Budd

Relevance? Oh never mind. I don't want to drag this out any longer.

SWM
