[Wittrs] Re: formal arguments/thought experiments/1st and 2nd order props

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 18 Apr 2010 02:39:38 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

>  Budd:
>
> >... I diagnose:
>
> > Yes, once you talk of functional properties as first-order properties, you 
> > entertain Searle's biological naturalism!
>
> [Addendum: Note that Searle's biological naturalism is meant to be open to AI 
> if the principles upon which it rests can be construed in the form of 1st 
> order properties (the electricity is 1st order while the computation part of 
> a program is 2nd order).  He is arguing that Strong AI is a thesis amounting 
> to a claim that 2nd order properties, even in super bad, speedy, complex bulk 
> (fat syntax?), are a noncandidate for a theory of mind--unless one is a 
> behaviorist a la Dennett]
>

So he is saying that if you're Dennett this works in the real world, but not if 
you're Searle???


>
> > > Stuart:
>
> > > So on your view computational processes running on computers are what 
> > > Searle means by "biological naturalism"?
> > >

> > Budd:
>
> > No.  It is what you mean without knowing it.  You don't distinguish the 
> > two.  Both are on a par...
>
> [Addendum: ...for you given how you are at a loss to understand why Searle 
> argues against strong AI without being a crypto-dualist.  But he is not such, 
> so....]
>

Assertions, assertions . . .

>
> Stuart:
>
> > > Uh-huh, yep, right! Now can you demonstrate any of this with some reasons 
> > > to back up what is otherwise just bald assertion?
> >
> > You say here that I mean something other than I said and yet offer nothing 
> > to support that.
>
>
> To be more precise, what you say and what you think you say sometimes come 
> apart--except in your recent reply to Bruce vis-a-vis fMRI and the progress 
> such allows for brain research.  I thought you offered some decent responses 
> there.  Anyway:
>

> Maybe you missed the argument.  Here it is, again:
>
> Stuart writes:
>
> "You promised an argument, presumably to counter mine. Okay, I'll bite. But
> where
> is it?"


> From April 15 I think:
>
> My argument starts by diagnosing your failure to distinguish 1st and 2nd order
> properties.
>

A faux "failure" because no one in AI research ever argued for programs in 
isolation (from the hardware they run on) as causing consciousness! It is about 
and has always been about computational processess RUNNING ON COMPUTERS.

> It continues by noting that you are left with no motivation for preferring a
> computational theory of how semantics is achieved from Searle's theory which
> bottoms out in 1st order properties of brains.
>

This isn't clear to me. What's your point?

My argument isn't that syntax causes semantics, or that computer processes 
running on computers can; it's only that the CR and its associated argument 
don't show that they don't or can't.


> You end up endorsing Searle's view but without understanding that that is the
> upshot of your failure to make the above distinction.
>


This is utterly absurd, Budd, and echoes the argument offered by PJ on Analytic, 
which you happily picked up from him at one point. But it's just an assertion 
as you have presented it, and asserting something doesn't make it true.

> You win. Upshot: So does Searle. Still want to argue with Searle? Then you
> win and you lose, therefore you lose.
>
> QED.
>
> Or not.

THIS is the argument you wanted me to attend to???


>
> Presumably, you may want to insist that computers ipso facto do things 
> differently from brains without, of course, knowing how brains do what they 
> do.  Or maybe you enjoy Paul Churchland's thesis that brains are computers 
> and do information processing and that we should be careful not to mire our 
> research by paying undue attention to the crude lessons of learning to speak 
> with language....
>

I don't think we can dismiss, on logical grounds alone, either:

1) that brains operate like computers; or

2) that computers can do what brains do even if they don't operate in the same 
way.

> Put it this way again, again.  EVEN IF Searle is wrong to distinguish 
> functional properties from 1st order properties (and I don't think he is and 
> neither do I think Jaron Lanier is uninformed on the topic when critiquing 
> the notion of more and more software, given the problems that ensue as we 
> know how to make software these days while others like Dennett and Hofstadter 
> are ducky with the possibility of tricks with 2nd order properties [anyone 
> smell weak AI here?]),


Is this also supposed to be part of your argument?


> it is this distinction that makes him a bona fide nondualist.  Any assertions 
> to the contrary (Dennett's charge of crypto-dualism in his paper "Granny's 
> Campaign for Safe Science" for example) are simply
> uninformed or a case of fabrication.


Assertions are not arguments and neither are accusations.


>  For strong AI to work, it must avail itself of 1st order properties.


Nor is repetition.


>  Information processing is a highly functional notion, though, and can be 
> seen to trade on 1st order properties.  The brain is a computer because 
> everything under the sun can be so interpreted also.


Then that's irrelevant to the question at hand and another Searlean red herring.


>  But I suppose one needs to be gifted with language in order to sort these 
> issues out....
>
> But maybe I could never prove this to you because I have no good argument.  
> Perhaps.


Yes, perhaps this is just about dueling intuitions in the end.


>  I'm just trying to diagnose why you don't buy Searle's distinction between 
> 1st and 2nd order properties as the root reason for concocting his CR, and 
> summarizing it via the CRA, then summarizing it via axioms in Sci. American, 
> then summarizing it instead in the form of eight points at the end of his APA 
> address.
>

I've told you why in numerous posts here and elsewhere.

> Every version, including the first premise of the first "formal" CRA, is 
> predicated on this very distinction.  You asked how he justifies the 
> noncausality claim.  You got your answer.  And you vill like it and not like 
> it!!!  :-)
>
>
> Cheers,
> Budd
>
>

NO AI RESEARCHER EVER ARGUED FOR PROGRAMS IN ISOLATION FROM THE COMPUTERS ON 
WHICH THEY RUN. And computers, of course, are physical and everything they do 
is physically accomplished. Just like brains.

SWM
