[Wittrs] Re: I Experience in Ordinary Language

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 10 Mar 2010 04:39:40 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:


>> Here is what you are missing. Nobody in AI, nobody in computer
>> science, no mathematician - none of them ever assumed that the
>> semantics would be in the CPU. It was always assumed that it would
>> be in the system as a whole, rather than in the CPU.


> That I'm not willing to buy. The reason is that in the target
> article Searle mentions the thesis that in virtue of the program
> alone (Schank) story comprehension may, ex hypothesi, take place
> if the system passes a TT, again, in virtue of the program alone.

It occurs to me that there might be some miscommunication here.  When I
talk of the CPU, I am talking of the processor chip (a Pentium chip,
for example).  However, some people use the term "CPU" to refer to the
whole box.


> Let me know if the systems reply is no longer meant (or was never
> meant?) to be a thesis of computational functionalism given the
> hardware which is necessary to carry the program.

Most AI folk would say that the semantics would be there in the data
structures in memory.  That's still part of computational
functionalism.  But if we look at Searle's Chinese Room argument, then
Searle, working in that room, is only carrying out the operations in
the rulebook, and is not himself part of those data structures.  So
there would be no expectation that Searle would be aware of the
semantics.

Incidentally, AI people have long attempted to implement semantics with
data structures, though not with any great success.
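
To make that concrete, here is a toy sketch, in Python, of the sort of
thing meant by putting semantics in data structures: a Schank-style
script for a stereotyped situation.  The names and layout here are my
own illustration, not any actual AI system:

    # A toy, Schank-style "script": a data structure intended to carry
    # the semantics of a stereotyped situation (a restaurant visit).
    # Everything here is an illustrative invention, not a real system.

    RESTAURANT_SCRIPT = {
        "roles": ["customer", "waiter"],
        "events": ["enter", "order", "eat", "pay", "leave"],
    }

    def answer(story_events, question_event):
        """Say whether an event happened, using the script to fill in
        events the story leaves unstated."""
        if question_event in story_events:
            return "yes (stated in the story)"
        if question_event in RESTAURANT_SCRIPT["events"]:
            return "probably yes (inferred from the script)"
        return "unknown"

    # The story never mentions paying, yet the system answers "yes":
    print(answer({"enter", "order", "eat", "leave"}, "pay"))

The point, as far as the systems reply goes, is that whatever
"understanding" there is lives in the script structure, not in the
mechanism that executes the lookup.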


> I thought that Searle was just offering a possible case where the
> TT would be passed without the semantics, refuting the sufficiency
> of the TT.

Searle assumed that the semantics would have to be evident to whoever
was carrying out the rules in the rulebook.  He had trouble
contemplating the possibility that the semantics could reside in data
structures so complex that the person carrying out the rules would
lack the overview needed to grasp how they worked.


> But I do see the point of the systems reply. I usually write that in
> one sense the system reply is no different from Searle's biological
> naturalism.

To an extent, it is.  Searle says that intentionality comes from the
causal properties of the brain.  If the AI system can get the behavior
right, then a good case can be made that it has the appropriate causal
properties.


> But to the extent that it is a thesis of explicitly computational
> functionalism, then the systems reply makes me confused.

I think Searle overdid the emphasis on computationalism.  AI people
have particular ideas in mind, with computation on the inside, and with
sensors and effectors on the outside to communicate with the physical
world.  The computation does part of the job, and they see it as the
most important part.

Think of it along the lines of Fodor's reasoning in his
"methodological solipsism" paper.  There are things to explain that
don't depend on contact with the physical world (or at least don't
seem to depend on it).  And that's where the AI folk see computation
doing the heavy lifting.
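
As a rough sketch of that layout, in Python, with every name a
hypothetical placeholder rather than any real robotics API:

    # Computation on the inside, sensors and effectors on the outside.
    # A minimal, made-up control loop to illustrate the division of labor.

    def read_sensor():
        # Stand-in for a physical sensor.
        return {"obstacle_ahead": True}

    def actuate(command):
        # Stand-in for a physical effector.
        print("effector command:", command)

    def compute(percept):
        # The "inside": symbol manipulation with no direct contact
        # with the physical world.
        return "turn_left" if percept["obstacle_ahead"] else "go_forward"

    # The computation touches the world only at the loop's edges.
    actuate(compute(read_sensor()))

On the AI view, the compute step is where the interesting work
happens; the sensors and effectors merely couple it to the world.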


> But is it legitimate to speak as if the mind is software to the
> brain's hardware?

You are allowing yourself to be confused by what was never more than a
very rough analogy.


> I thought Searle was exploding the myth of the computer by noting
> that (ten years after the CRA in an APA address) the notion of
> computation doesn't name an intrinsically physical process.

Or maybe Searle was exploding the myth that Searle knew what he was
talking about.

Sure, people question whether computers actually compute.  Some say
that a computer is just an electrical appliance, and that the
computation is in how we interpret what it does.  And some AI people
will explicitly claim that computation is physical, and that what a
computer does really is computation.  However, nothing important
really hinges on who is right in such arguments.  Those arguments are
really just word games about the "right" way to talk about
computation.

Incidentally, when philosophising about such things, I tend to favor
the view that the computer is really just an electrical appliance.
However, when teaching a computer science class, I talk about what the
computer does as if it is actually computing.  I guess you could say
that I switch from one language game to the other.


> I keep banging my head on whether what Searle means by symbol
> manipulation is not what others mean by symbol manipulation caused
> by hardware.

I would guess that Searle was banging his head on that, too.  But,
again, it depends on which language game you are playing.


> I really appreciate your taking some time to set me straight if what
> I don't buy is something I'm not buying because I can't see that it
> is equivalent with what Searle means by his "biological naturalism"
> in the first place.

In one sense, it is equivalent.  If you can get the behavior right,
then the causal properties must be right, so Searle's biological
naturalism would say that you have the semantics right.  From another
point of view, computers don't look at all like biologically natural
things.  For that matter, neither do mechanical devices.  But I think
that any unnaturalness would show up as AI never being able to get the
behavior right, so I suggest Searle is making the wrong kind of
argument.

Regards,
Neil
