[Wittrs] Re: What the Man in the Room Knows (and when does he know it?)

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 24 Mar 2010 19:48:50 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "iro3isdx" <wittrsamr@...> wrote:
>
>
> --- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@> wrote:
>
>
> > Better yet, I'll post the relevant section.
>
>
> >> My response to the systems theory is quite simple: Let the individual
> >> internalize all of these elements of the system. He memorizes
> >> ...
>
> I'm surprised to see you taking that part seriously.  It's a bullshit
> answer.  If Searle does not have a better response than that to the
> Systems Reply, then Searle does not have an answer to the Systems
> Reply.
>
> Regards,
> Neil

But your reply is itself a bullshit reply if it doesn't move us anywhere toward 
less bullshit.  Maybe you don't know what Searle was getting at in his reply?  
Maybe you do.  Let's assume you do, and that you also believe we shouldn't take 
AIers so seriously.  If so, then any bullshit will do.  Isn't that how Searle is 
handled quite sweepingly here?  You don't take him seriously, and so you don't 
have to read him carefully.

I always thought that the "stuff" being internalized in Searle's reply to the 
systems reply was the "stuff" of computation.  Computational functionalism is 
not the same thing as type-physicalism, as you probably already know and as 
Stuart may or may not know.  So the systems reply is either a bullshit reply 
or no reply at all, meaning it may be a reply that changes the subject.  It 
may even change the subject and amount to agreement with the spirit of Searle's 
claims, for all those who are willing to flesh out computation AS physics.  Any 
remaining disagreement would be merely apparent and part of a bullshit-fest.

I like to keep in mind that what computation theory discovers can in no way be 
refuted by physics.  But that doesn't mean that computation describes physics 
in any explanatory way.  For example, we can have a mathematical theory of 
perception while not feeling satisfied that we are describing the actual 
mechanisms appropriately.

For another example, assume (as you must) that computational functionalism 
either amounts to a thesis fleshed out in terms of first-order physical 
properties or it does not:

1.  Computation fleshed out entirely in first-order property terms, i.e., 
computation is physical.  I believe Josh and Stuart have enunciated this 
thesis, which amounts to not understanding the very motivation of 
functionalism; but to their credit, they do understand that it is not 
chauvinistic toward the biological.

2.  Computation is abstract, and though we can flesh out computation via 
physical hardware that runs abstract programs, the programs AS SUCH (for those 
interested in actual thought content) can't cause anything save what the 
programmer had in mind to cause as an extension of his mind/hands via logic 
gates, which immediately involve a system decomposable into abstract program 
and physics.  Say it is all physics and you lose the program.  Say it is all 
program and you lose the physics.
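The separability in point 2 is just the old point about multiple realizability, and a toy sketch can make it concrete.  Everything below is hypothetical illustration, not anyone's actual theory: the "program" is a trivial syntactic rule (append a '1' to a unary numeral), and the two functions are two different "physical" realizations of that same rule.  The program is what they share; it is not identical to either physical story.

```python
# Hypothetical sketch of software/hardware separability.
# The abstract "program": the successor rule on unary numerals
# (a purely syntactic specification, independent of any hardware).

def realization_a(tape: str) -> str:
    # Realization A: one bulk operation (think: a register machine).
    return tape + "1"

def realization_b(tape: str) -> str:
    # Realization B: walk the tape cell by cell, then write at the end
    # (think: a tape machine with very different "physics").
    cells = []
    for symbol in tape:
        cells.append(symbol)
    cells.append("1")
    return "".join(cells)

# Same program, different physics: both realizations agree everywhere.
for n in range(5):
    tape = "1" * n
    assert realization_a(tape) == realization_b(tape)
```

Describe only the physics and the two functions have nothing interesting in common; describe only the shared rule and you have said nothing about how either machine works.  That is the "lose the program / lose the physics" dilemma in miniature.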

So, Searle is assuming 2. above and saying that it doesn't matter how much 
"computation" is internalized.  No semantics, no how--and why else would 
anybody suppose the thesis that "minds have semantic contents" is up for grabs 
unless they were already mired in some philosophical picture that does no work?  
Well, do they mean it?!  ;-)

Searle hardly needs to say anything else in his reply to the systems reply.

If, on the other hand, one is assuming 1. above, then one is changing the 
subject in either of two (or more) ways:

3.  One may claim that Searle does not understand the very thesis of the 
research program of those he dubs strong AIers.

4.  One may claim that it is wrong to draw a sharp distinction between software 
and hardware.

So, Searle's "bullshit answer," to quote Neil, amounts to drawing a distinction 
between software and hardware.  The point is that no amount of software (in 
serial or parallel form) could possibly amount to semantics.

When it is pointed out that no human could possibly think fast enough to 
internalize the software, this is a bullshit reply to Searle if the distinction 
between software and hardware is maintained, whereby the software is understood 
as abstract symbol manipulation.
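"Abstract symbol manipulation" here can be made vivid with a toy Chinese Room.  The rulebook below is entirely made up for illustration; the point is only that executing it requires matching shapes to shapes, with no grasp of what any symbol means.

```python
# A toy Chinese Room: input symbols are paired with output symbols
# by shape alone.  The rulebook is hypothetical; the operator (or the
# CPU) needs no understanding of either column to follow it.

RULEBOOK = {
    "你好吗": "我很好",    # the room uses the pairing, not the meaning
    "你是谁": "我是程序",
}

def room(symbols: str) -> str:
    # Pure syntax: look up the input shape, emit the paired shape.
    # The default reply is just another uninterpreted shape.
    return RULEBOOK.get(symbols, "请再说一遍")
```

Internalizing the rulebook (memorizing the dictionary) changes where the lookup happens, not what it is: it is still shape-matching, which is Searle's point against the Systems Reply on assumption 2.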

OTOH, assume one conflates software and hardware.  Then one is no longer 
talking of a system with software/hardware separability.  One is talking about 
a type of system that Searle is not arguing against.

Funny how this works.  For a successful understanding of where Searle's reply 
is weak, one has to assume his position.  And for a successful understanding of 
where Searle's reply is knock-down, one only needs to understand how computers 
actually work.

Now, when Stuart has a problem with Searle's both not telling us how brains do 
it and arguing against the very coherence of a computational theory of mind, he 
likes to assume Searle's physicalism without understanding what a computational 
theory of mind amounts to.  It amounts, in part, to a theory wherein we can all 
stop making sense.  The thesis is, rightly, compatible with there being no 
semantic contents in minds.

Now, if we could just buy that, then there would be no reason to debate.  OTOH, 
some think it just fine to have bullshit debates.

So, I'm coming down hard on Neil here.  Bullshit is as bullshit does.  And what 
have computers done for a theory of mind lately besides get us to change the 
subject?

OTOH, I'm all for pointing out that the issue of mind needn't be as drawn out 
as it is here.  I'm willing to point out that the long, drawn-out debate is one 
of purpose.  It is perhaps designed to show something: just how long the other 
side is willing to stay mired in language games by default.

But is that really so?  I think not.  Poof!

Another irony:

Wittgenstein warned against philosophy's offering theories.  Hacker's 
Wittgensteinian bent makes him say the darndest things, as if AI were a more 
respectable way of investigating mind than the inductive one of actual brain 
research.

Stuart used to say (I wonder if he still believes it?) that Searle was trying 
to demolish a research program via a logical argument.  On the contrary, Searle 
doesn't argue with weak AI (think of strong AI minus the claims that Searle, 
rightly or wrongly, attributed to strong AIers given, um, what they said).  
Instead, with Searle you have two research programs.  With Hacker you have only 
one, as if Searle's biological naturalism were a conceptual mistake.  And that's 
just awful.

A plague on their houses, Fodor would submit--but don't take him literally!


Cheers,
Budd





=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
