[Wittrs] Re: formal arguments/thought experiments/1st and 2nd order props

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 19 Apr 2010 22:57:38 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
>
> >  Budd:
> >
> > >... I diagnose:
> >
> > > Yes, once you talk of functional properties as first-order properties, 
> > > you entertain Searle's biological naturalism!
> >
> > [Addendum: Note that Searle's biological naturalism is meant to be open to 
> > AI if the principles upon which it rests can be construed in the form of 
> > 1st order properties (the electricity is 1st order while the computation 
> > part of a program is 2nd order).  He is arguing that Strong AI is a thesis 
> > amounting to a claim that 2nd order properties, even in super bad, speedy, 
> > complex bulk (fat syntax?), is a noncandidate for a theory of mind--unless 
> > one is a behaviorist a la Dennett]
> >
>
> So he is saying that if you're Dennett this works in the real world but not 
> if you're Searle???

Learn a distinction or two, Stuart!!!  Robots may work in the real world.  
They don't have intentionality, though, if their essence involves partial 
characterization in fundamentally functional terms, as with all possible software 
systems.  So, if, as Joe correctly points out, one can do safe science (my 
additional description) by redefining intentionality thinly enough, then 
Dennett wins through definitional fiat.



>
> >
> > > > Stuart:
> >
> > > > So on your view computational processes running on computers are what 
> > > > Searle means by "biological naturalism"?
> > > >
>
> > > Budd:
> >
> > > No.  It is what you mean without knowing it.  You don't distinguish the 
> > > two.  Both are on a par...
> >
> > [Addendum: ...for you given how you are at a loss to understand why Searle 
> > argues against strong AI without being a crypto-dualist.  But he is not 
> > such, so....]
> >
>
> Assertions, assertions . . .

It was an addendum.  It was also part of a counterfactual: you wouldn't have 
a problem with Searle if you distinguished between 1st and 2nd order properties 
and if you knew that it is this very distinction that motivates the first 
premise of the CRA.  Since you refuse to make the distinction, you can make 
your ignoratio elenchi sound more innocent than it is.  So, benighted or lying?




> >
> > Stuart:
> >
> > > > Uhhuh, yep, right! Now can you demonstrate any of this with some 
> > > > reasons to back up what is otherwise just bald assertion?
> > >
> > > You say here that I mean something other than I said and yet offer 
> > > nothing to support that.
> >
> >
> > To be more precise, what you say and what you think you say sometimes come 
> > apart--except in your recent reply to Bruce vis a vis fmri and the progress 
> > such allows for brain research.  I thought you offered some decent 
> > responses there.  Anyway:
> >
>
> > Maybe you missed the argument.  Here it is, again:
> >
> > Stuart writes:
> >
> > "You promised an argument, presumably to counter mine. Okay, I'll bite. But
> > where
> > is it?"
>
>
> > From April 15 I think:
> >
> > My argument starts by diagnosing your failure to distinguish 1st and 2nd 
> > order
> > properties.
> >
>
> A faux "failure" because no one in AI research ever argued for programs in 
> isolation (from the hardware they run on) as causing consciousness! It is 
> about and has always been about computational processes RUNNING ON COMPUTERS.

Is this just another way of not stopping to think about the distinction between 
1st and 2nd order properties?


>
> > It continues by noting that you are left with no motivation for preferring a
> > computational theory of how semantics is achieved from Searle's theory which
> > bottoms out in 1st order properties of brains.
> >
>
> This isn't clear to me. What's your point?

The point is the distinction, and your refusal to make it.

>
> My argument isn't that syntax causes semantics or computer processes running 
> on computers can but only that the CR and its associated argument don't show 
> that they don't or can't.


Searle's argument shows that the Turing test allows for false positives.  So the 
strong AI program is safe given that it is unfalsifiable in principle.

>
> > You end up endorsing Searle's view but without understanding that that is 
> > the
> > upshot of your failure to make the above distinction.
> >
>
>
> This is utterly absurd, Budd, and echoes the argument offered by PJ on 
> Analytic which you happily picked up from him at one point.


Hey, I made the same distinctions six years ago.  So there!


> But it's just an assertion as you have presented it and asserting something 
> doesn't make it true.

Yada, yada.

>
> > You win. Upshot: So does Searle. Still want to argue with Searle? Then you
> > win and you lose, therefore you lose.
> >
> > QED.
> >
> > Or not.
>
> THIS is your argument you wanted me to attend to???

Yes!



> >
> > Presumably, you may want to insist that computers ipso facto do things 
> > differently from brains without, of course, knowing how brains do what they 
> > do.  Or maybe you enjoy Paul Churchland's thesis that brains are computers 
> > and do information processing and that we should be careful not to mire our 
> > research by paying undue attention to the crude lessons of learning to 
> > speak with language....
> >
>
> I don't think we can dismiss, on logical grounds alone either:
>
> 1) that brains operate like computers;

I charge "safe science," and unfalsifiable science at that.

> or
>
> 2) that computers can do what brains do even if they don't operate in the 
> same way.

Yes, the distinction involves 2nd order properties....


>
> > Put it this way again, again.  EVEN IF Searle is wrong to distinguish 
> > functional properties from 1st order properties (and I don't think he is 
> > and neither do I think Jaron Lanier is uninformed on the topic when 
> > critiquing the notion of more and more software, given the problems that 
> > ensue as we know how to make software these days while others like Dennett 
> > and Hofstadter are ducky with the possibility of tricks with 2nd order 
> > properties [anyone smell weak AI > here?]),
>
>
> Is this also supposed to be part of your argument?

Yes.  It makes for fun wherein afterward you go back to claiming crypto-dualism 
just because, for you, the CR implies it.  I argue that it doesn't, given the 
motivated distinction which, btw, allows us to understand exactly how software 
works.
>
>
> > it is this distinction that makes him a bona fide nondualist.  Any 
> > assertions to the contrary (Dennett's charge of crypto dualism in his paper 
> > "Granny's Campaign for Safe Science" for example) are simply
> > uninformed or a case of fabrication.
>
>
> Assertions are not arguments and neither are accusations.

Well, it is part of my argument.  So don't confuse a part with the whole, 
please....
>
>
> >  For strong AI to work, it must avail itself of 1st order properties.
>
>
> Nor is repetition.

Well, you could have charged "false" instead, if so inclined....
>
>
> >  Information processing is a highly functional notion, though, and can be 
> > seen to trade on 1st order properties.  The brain is a computer because 
> > everything under the sun can be so interpreted also.
>
>
> Then that's irrelevant to the question at hand and another Searlean red 
> herring.

So you say.  But you're full of red herrings.  One of them is conflation of....
>
>
> >  But I suppose one needs to be gifted with language in order to sort these 
> > issues out....
> >
> > But maybe I could never prove this to you because I have no good argument.  
> > Perhaps.
>
>
> Yes, perhaps this is just about dueling intuitions in the end.

Which Fodor would find very vulgar.  After all, Searle starts by noticing how 
computers work in the real world.
>
>
> >  I'm just trying to diagnose why you don't buy Searle's distinction between 
> > 1st and 2nd order properties as the root reason for concocting his CR, and 
> > summarizing it via the CRA, then summarizing it via axioms in Sci.  
> > American, then summarizing it instead in the form of eight points at the 
> > end of his APA address.
> >
>
> I've told you why in numerous posts here and elsewhere.

Bullshit.  You can start now, though, in a separate post if you wish.  Why is 
Searle wrong to distinguish these properties in discerning how computers 
actually work?  Create a new header and please tell me.  The reason I said 
"bullshit" is that you recently said that reading the target article may be 
otiose given different formulations of his argument over the years.  I'm 
maintaining that that is false given that his distinction between properties 
motivates all the versions.
>
> > Every version, including the first premise of the first "formal" CRA, is 
> > predicated on this very distinction.  You asked how he justifies the 
> > noncausality claim.  You got your answer.  And you vill like it and not 
> > like it!!!  :-)
> >
> >
> > Cheers,
> > Budd
> >
> >
>
> NO AI RESEARCHER EVER ARGUED FOR PROGRAMS IN ISOLATION FROM THE COMPUTERS ON 
> WHICH THEY RUN. And computers, of course, are physical and everything they do 
> is physically accomplished. Just like brains.
>
> SWM

Bullshit.  Computers rely on 2nd order properties for their accurate 
description.  And brains don't--at least if you're not a property dualist.  The 
capitals aren't going to erase the need, in our minds, for distinguishing 1st 
and 2nd order properties.

But weak AI is perfectly acceptable and consistent with your capitals above....

And weak AI is not the same as Searle's biological naturalism.

Why?  Because the former entails 2nd order properties doing much of the heavy 
lifting.  That heavy lifting may amount to simulation.  The real heavy lifting 
of brains is in terms of 1st order properties only.  Unless you're a dualist.  
Or a functional dualist..., you know, the kind that can't speak straight when 
treating 2nd order properties as if they were just like the 1st order 
properties of brains.


Cheers,
Budd


