[Wittrs] Re: Consciousness in Three Lists

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 14 May 2010 00:15:25 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

> Stuart writes:
>
> "This business about it being about the hardware is just another false trail 
> of yours (borrowed from PJ on Analytic since I don't believe you ever made 
> the point in our earlier discussions on Parr's list)."
>

> I thought you were either retarded or playing a dubious game at Parr's list, 
> Stuart.  That was six years ago.  I haven't seen anything to contra-indicate 
> that either/or today.
>

Now there's a real serious argument!


> So the business is about the software?
>

About computational processes running on computers or, as Searle himself puts 
it, about "implemented programs."


> Earlier you said that hardware matters for complexity.  The CR was 
> underspecked as you said.
>
> Did you mean underspecked in terms of the complexity of the hardware or the 
> complexity of the software?
>


Underspecked in terms of the CR qua system not doing enough of what would need 
to be done to achieve anything comparable to what we recognize as an instance 
of understanding in entities like ourselves.


> I always thought that the CR was equivalent to a UTM if one understood the 
> original strong AI thesis.
>
> Even RUNNING PROGRAMS are formal--they don't even get started in playing the 
> game of what might cause and realize semantics and consciousness.
>


That's just the same old dogmatic assertion, with "formal" being as ill-defined 
as "syntax" so often is when Searle is busy "coining" it. Absent a serious 
effort at definition, the claim could mean anything and probably does, depending 
on your need at the moment.


> To think otherwise, you can go ahead and do a conflationary job with the 
> upshot that it is not something Searle is arguing against.


But it looks like no one told Searle! Oh, I forgot, you must have missed that, 
even though I have made that point dozens of times by now.


>  And here you have to qualify the upshot by its being presented in a mode 
> which suggests the contrary.  IOW, you are thinking of strong AI in a way 
> which Searle isn't.


If Searle doesn't mean by "Strong AI" the thesis that consciousness can be a 
product of computational processes running on computers, then his "Strong AI" 
is as strawmanny as they come!


> But sometimes it appears as if you want your formal programs and their 
> causality in 1st-order terms too.
>


You just don't understand my point.


> I think the reason you think distinguishing 1st and 2nd-order properties 
> pointless is because you are at every turn conflating them with the upshot 
> that you're trying to have your cake and eat it too.


Hmmm, another compelling argument.


>  Peter Jones at Analytic was right to catch you harboring a position you were 
> trying (unwittingly as it turns out) to argue against.
>

PJ was wrong, as are you in repeating that silly claim. If my position really 
is Searle's, why doesn't Searle notice it when he contests Dennett's thesis, 
with which mine is in almost complete sync?

Do you think anything goes as long as someone claims it and it sounds kind of 
cool?

Yet not once have you ever responded to my question: if PJ's claim (and now 
yours) were true, why wouldn't Searle himself know it?


> I just happen to think Searle's cake pretty good too.  What?  No more left?  
> You Slob!!  ;-)
>
>
> Cheers,
> Budd

Huh?

SWM
