[Wittrs] Re: I Experience in Ordinary Language

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 10 Mar 2010 21:42:41 -0000

Thanks so much for your intelligent responses, Neil.  I always suspected that 
there were shifting perspectives on the part of the functionalists: David 
Lewis equating functionalism with a physicalism; Searle equating functionalism 
with something too abstract to be a candidate for a causal theory of mind; 
Chalmers thinking functionalism leads to epiphenomenalism; Armstrong (and 
Dennett) attempting an ontologically reductive account of functionalism as a 
physicalism with built-in teleology a la the intentional stance, which 
involves levels of intentionality serving as that original functionalist 
notion of a level of explanation between the brute causal and the intentional, 
the intermediate level known as the computational level.

I would like to add some replies below and maybe a question or two or many!

--- In WittrsAMR@xxxxxxxxxxxxxxx, "iro3isdx" <wittrsamr@...> wrote:
> --- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@> wrote:
> >> Here is what you are missing. Nobody in AI, nobody in computer
> >> science, no mathematician - none of them ever assumed that the
> >> semantics would be in the CPU. It was always assumed that it would
> >> be in the system as a whole, rather than in the CPU.
> > That I'm not willing to buy. The reason is that in the target
> > article Searle mentions the thesis that in virtue of the program
> > alone (Schank) story comprehension may, ex hypothesii, take place
> > if the system passes a TT, again, in virtue of the program alone.
> It occurs to me that there might be some miscommunication here.  When I
> talk of the CPU, I am talking of the processor chip (that  pentium chip,
> for example).  However, some people use the term  "CPU" to refer to the
> whole box.

The whole box being spelled out entirely in computational terms or not?  And if 
not, then how is the hardware adding anything computationally?  It is not, 
maybe?  If not, then just what is the connection between the computation as a 
physical process and processes that are just brutish causal processes?  Are we 
both conflating the two while insisting a la functionalism upon a distinction 
at the same time?  In short, isn't there an intrinsic difference between 
systems which are described in fully functionalist terms like S/H (software on 
hardware) and are thus subject to the symbol-grounding problem, on the one 
hand, and nonS/H systems like human and animal brains on the other?  I note that 
below you write that there is often a shift in language games such that perhaps 
even nonS/H systems are described as if they were simply a matter of 
potentially equivalent S/H systems.  Searle thought such a shift might be 
disastrous and lead to a sort of hylozoism such that strong AI wouldn't in 
principle be able to distinguish those systems that have minds from, say, 
thermostats which don't--unless we think of intentionality in terms of degrees 
such that thermostats have beliefs in virtue of their being describable as 
performing computations.  But since anything can be given a computational 
description, the notion of a (traditional at least) theory of semantics gets 
lost and nobody knows how to get a traditional one out of meaning similarity.  
Hence the new revisionism which is supposed to walk like a duck but sounds to 
some (like Fodor and Lepore) more like a pig in a poke.
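
The thermostat point can be made concrete with a toy sketch (my own 
illustration, not anything from Searle's text): a thermostat is exhaustively 
described by a conditional on a measured number, and nothing in that 
description distinguishes "turning on the heater" from "believing the room is 
cold"--the intentional gloss is entirely in our description of it.

```python
# A thermostat, exhaustively described: a conditional on a number.
# Nothing here grounds a "belief" that the room is cold -- the
# intentional vocabulary is supplied wholly by the describer.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False

    def step(self, temperature: float) -> bool:
        # The device's entire "cognitive life" in one comparison:
        self.heater_on = temperature < self.setpoint
        return self.heater_on

t = Thermostat(setpoint=20.0)
print(t.step(18.0))  # True  -- "believes the room is cold"?
print(t.step(22.0))  # False -- "believes the room is warm"?
```

Since anything with states can be redescribed this way, a computational 
description alone cannot be what marks off the systems that have minds.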

> > Let me know if the systems reply is no longer meant (or was never
> > meant?) to be a thesis of computational functionalism given the
> > hardware which is necessary to carry the program.

> Most AI folk would say that the semantics would be there in the  data
> structures in memory.  That's still part of computational
> functionalism.  But, if we look at Searle's Chinese Room argument,  then
> Searle, working in that room, is only carrying out the  operations in
> the rulebook, and is not himself part of those data  structures.  So
> there would be no expectation that Searle would be  aware of the
> semantics.
> Incidently AI people have long attempted to implement semantics  with
> data structures, though not with any great success.

And didn't he allow that he could (in response to the systems reply) 
internalize the computational data structures (he called them extra bits of 
paper, rather offhand, but implied they were to be thought of as simply more 
computation, not more hardware) and (ex hypothesi) pass a TT yet still not 
have the semantics?  Isn't there a symbol-grounding problem for the systems 
reply as well as for the robot reply?  Yes and no, I presume, given language 
game shifting?
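
The symbol-grounding worry can itself be sketched in code (a toy of my own 
construction, not a claim about actual AI systems): a "semantic" data 
structure that defines every symbol only in terms of other symbols never 
bottoms out in anything non-symbolic, so chasing definitions just cycles 
through more symbols.

```python
# A toy "dictionary semantics": every symbol is defined only by
# other symbols.  Following definitions never reaches anything
# outside the symbol system -- the grounding problem in miniature.
# (Symbol names are arbitrary placeholders.)

definitions = {
    "shui": ["ye", "ti"],   # one symbol defined via others...
    "ye": ["shui", "ti"],   # ...which are defined via it in turn.
    "ti": ["shui"],
}

def chase(symbol: str, steps: int) -> list:
    """Follow each symbol's first-listed definition; note that the
    path only ever contains more symbols, never a referent."""
    path = [symbol]
    for _ in range(steps):
        symbol = definitions[symbol][0]
        path.append(symbol)
    return path

print(chase("shui", 4))  # ['shui', 'ye', 'shui', 'ye', 'shui']
```

Adding more entries only enlarges the circle; it is the same move as adding 
"more bits of paper" for the man in the room to internalize.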
> > I thought that Searle was just offering a possible case where the
> > TT would be passed without the semantics, refuting the sufficiency
> > of the TT.
> Searle assumed that the semantics would have to be evident to  whoever
> was carrying out the rules in the rulebook.  He had trouble
> contemplating the possibility that it could be in data structures  whose
> complexity is such that the person carrying out the rules  would not
> have the comprehensive overview needed to be able to  grasp how the
> semantics worked.

I agree that putting a homunculus inside a UTM isn't going to sound right for 
starters to the thought experiment.  But the idea of a subsystem which 
understood Chinese in a way that English (the rules were to be spelled out in 
English--and, oh dear, machine language can't be in principle!) was not 
understood perhaps just begs the question of whether a complex enough machine 
language (complex syntax) could cause semantic understanding.  This is Searle's 
accusation: the systems reply just begs the question.  Shifting language games, 
the systems reply gets spelled out no differently than Searle's biological or 
other naturalism, wherein he argues against strong AI but not against AI OR 
artifactual machines capable of bona fide semantics, even consciousness.  
Another yes and no.  Just how exactly was functionalism to differ from (as 
well as be spelled out differently than) type-type physicalism anyway, were it 
not for the computational "level" between the brutish causal one where 
intentionality is "discharged" and the level where it is a system feature of a 
human, animal, or artificial brain spelled out in nonS/H terms?

> > But I do see the point of the systems reply. I usually write that in
> > one sense the system reply is no different from Searle's biological
> > naturalism.
> To an extent, it is.  Searle says that intentionality comes from the
> causal property of the brain.  If the AI system can get the behavior
> right, then a good case can be made that it has the appropriate  causal
> properties.

That sounds right.  But behavior produced brutishly is one thing (supposedly!) 
compared to behavior produced by a functional system spelled out, in 
computational terms, in intrinsically second-order properties (and so not 
intrinsically after all, but you may get my meaning about a distinction here), 
whatever that could mean.  Hence the difficulty in forming distinctions that 
cut ice here without falling afoul of either ordinary language or what is 
meant literally in some technical vocabulary.

> > But to the extent that it is a thesis of explicitly computational
> > functionalism, then the systems reply makes me confused.
> I think Searle overdid the emphasis on the computationalism.  AI people
> have particular ideas in mind, with computation on the  inside, and with
> sensors and effectors on the outside to communicate  with the physical
> world.  The computation does part of the job,  and they see it as the
> most important part.
> Think of it along the lines of Fodor's reasoning in his  "methodological
> solipsism" paper.  There are things to explain that  don't depend on
> contact with the physical world (or at least don't  seem to depend on
> it).  And that's where the AI folk see computation  doing the heavy
> lifting.

The heavy lifting being the machine language, which Fodor also pointed out is 
subject to the symbol-grounding problem.  Your offering above sounds just like 
the thesis of strong AI as spelled out by Searle after all.  To the extent that 
there is other lifting, is it in terms of more computation or more complex 
hardware...?  I'm supposing Searle wanted to focus on their heavy lifting, 
which was to be a different sort of lifting compared to type-type physicalism.  
I sense another language game shift such that functionalism is Janus-faced 
between physicalism and some sort of view of the mental as spelled out in 
computational terms.  And those terms are also Janus-faced, given that by 
computation one might mean a description of events, such that particle 
physicists might do all their work by doing a new kind of science of 
computation, as in Wolfram's new kind of science (big book!).

> > But is it legitimate to speak as if the mind is software to the
> > brain's hardware?
> You are allowing yourself to be confused by what was never more  than a
> very rough analogy.

But some do take, say, computation as leading to a hylozoism.  Chalmers at 
least was gracious enough to follow out the implications of taking 
functionalism seriously.  It amounts to what Searle also objected to vis a vis 
the systems reply--that it leads to a hylozoism of sorts (enter language game 
shifting, so maybe yes, maybe no, and undoubtedly both).  But you are right 
that it is perhaps best not to take them seriously even when they attempt to 
sound serious.  That was a sort of joke.  It's just hard to know when to take 
them literally.  Just what are we supposed to let them get away with anyway?  
If someone says that no person ever held the view of strong AI, they would be 
contradicted by some of the claims (expressed as fantasies that might be true) 
in Hofstadter and Dennett's _The Mind's I_.  And then they fabricate a quote 
which actually gets Searle's point wrong and argue against a chimera.  One has 
to read Searle's review of their book carefully, though.
> > I thought Searle was exploding the myth of the computer by noting
> > that (ten years after the CRA in an APA address) the notion of
> > computation doesn't name an intrinsically physical process.
> Or maybe Searle was exploding the myth that Searle knew what he  was
> talking about.

Well, if language games are routinely shifted from "one" to "another," then it 
is in principle hard to pin down just what is being talked about on the part of 
the computationalists, let alone knowing whether THEY know exactly what they 
are talking about.  I think Searle went to some length toward making them 
cough up just what they had in mind.  If they reply by begging the question 
while shifting between as-if language games, then it is never going to follow 
(it can never be shown from this) that they understand anything Searle 
doesn't.  Or so it appears.  Hey, maybe some of the really smart ones could 
have by now demonstrated such clear insight into the matter that even Searle 
might learn something and confess in print where he was mistaken!

> Sure, people question whether computers actually compute.  Some say that
> a computer is just an electrical appliance, and that  the computation is
> in how we interpret what it does.  And some AI  people will explicitly
> claim that computation is physical, and what  a computer does really is
> computation.  However, nothing important  really hinges on who is right
> in such arguments.  Those arguments  are really just word games on what
> is the "right" way to talk  about computation.

I politely disagree.  They are not word games.  On the other hand, maybe it can 
be shown that a certain thesis amounts to word games?  For example, Hacker 
would accuse Searle of not making sense when Searle offers that it makes 
perfect sense to say that the brain causes consciousness.  If, on the other 
hand, you want to say that those who claim that thermostats have beliefs don't 
really mean what they say, then it may be a word game that is only being played 
by one of the sides.  I don't think Searle is playing word games when writing 
_Speech Acts_ or _Intentionality_.  It is as if one can allow that both sides 
are playing word games only if one already has an ideology on which it is all 
word games.  So I disagree that the arguments are word games for both sides.  
Perhaps it seems so from one side, though.  So I would agree that you might 
see it that way--but then I would put you on the other side of Searle, while 
remaining enormously respectful of how you are handling the discussion here.

> Incidentally, when philosophising about such things, I tend to favor
> the view that the computer is really just an electrical appliance.
> However, when teaching a computer science class, I talk about what  the
> computer does as if it is actually computing.  I guess you  could say
> that I switch from one language game to the other.

Notice that when you are teaching computer science, you need not mention 
anything about that which has intrinsic intentionality.  You are perfectly 
right to shift in the above way.  But that, I think, is obviously not the same 
sort of shifting that goes on when the "heavy lifting" gets spelled out 
brutishly as well as computationally, willy-nilly.

So, great responses, Neil.  I also appreciate your response to Stuart:

Stuart wrote:

"On the matter of homeostasis, why should a machine not be built
to operate in a kind of ongoing equilibrium with its environment,
i.e., to react to changes by continued internal readjustments, etc.?"

Neil responded:

"You could do that.  But it would only adjust for the kind of changes  in the 
environment that you program it for.  And that means you  have to program in 
lots of innate knowledge.  I doubt that you  would get consciousness that way."

Sounds exactly right to me!

