[Wittrs] Re: Searle's Revised Argument -- We're not in Syntax anymore!

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 24 May 2010 18:16:49 -0000

Here is Searle's new argument as he presents it in the APA address recently referenced 
by Gordon. (My comments follow the argument below.)

http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

This brief argument has a simple logical structure and I will lay it out:

1) On the standard textbook definition, computation is defined syntactically in 
terms of symbol manipulation.

2) But syntax and symbols are not defined in terms of physics. Though symbol 
tokens are always physical tokens, "symbol" and "same symbol" are not defined 
in terms of physical features. Syntax, in short, is not intrinsic to physics.

3) This has the consequence that computation is not discovered in the physics, 
it is assigned to it. Certain physical phenomena are assigned or used or 
programmed or interpreted syntactically. Syntax and symbols are observer 
relative.

4) It follows that you could not discover that the brain or anything else was 
intrinsically a digital computer, although you could assign a computational 
interpretation to it as you could to anything else. The point is not that the 
claim "The brain is a digital computer" is false. Rather it does not get up to 
the level of falsehood. It does not have a clear sense. You will have 
misunderstood my account if you think that I am arguing that it is simply false 
that the brain is a digital computer. The question "Is the brain a digital 
computer?" is as ill defined as the questions "Is it an abacus?", "Is it a 
book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"

5) Some physical systems facilitate the computational use much better than 
others. That is why we build, program, and use them. In such cases we are the 
homunculus in the system interpreting the physics in both syntactical and 
semantic terms.

6) But the causal explanations we then give do not cite causal properties 
different from the physics of the implementation and the intentionality of the 
homunculus.

7) The standard, though tacit, way out of this is to commit the homunculus 
fallacy. The homunculus fallacy is endemic to computational models of cognition 
and cannot be removed by the standard recursive decomposition arguments. They 
are addressed to a different question.

8) We cannot avoid the foregoing results by supposing that the brain is doing 
"information processing". The brain, as far as its intrinsic operations are 
concerned, does no information processing. It is a specific biological organ 
and its specific neurobiological processes cause specific forms of 
intentionality. In the brain, intrinsically, there are neurobiological 
processes and sometimes they cause consciousness. But that is the end of the 
story.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

My Comments:

1 & 2 hinge on a definitional claim, i.e., that textbooks say this is how we 
define "computation". Recall, however, that the argument for what Searle calls 
"Strong AI" is not about definitions but about what computers, as actual 
machines, can be brought to do.

3 tells us that, because something is not in accord with a particular 
definition (though we know definitions may vary, both by source and by context 
of use), we should draw a conclusion about an object that is only sometimes 
defined in the way Searle reports. In essence he is asking us to commit to the 
definition he has invoked without giving a reason why we need to do so.

In item 4 Searle argues that the failure of the definition he has focused on to 
work in a particular case is evidence of a failure of the thing defined to 
operate in a certain way. Yet this assumes that, just because the definition he 
has fixed on seems to break down here, there are consequences for the thing 
being defined, even though, on a different definition (say, a physicalist 
account), those consequences would not obtain.

In item 5 he is right to note that by "computer" we don't mean just anything 
that can be called one by some expansion of the term's meaning. However, he 
seems to confuse the homunculus issue when he says we need something along 
these lines, i.e., an observer/user, to make a system a "computer". In fact, we 
don't need anything of the sort for brains, so why presume we need it for a 
computer engaged in doing whatever it is brains do?

His point in item 6 misses the mark because, when considering computers qua 
brains, we are not interested in computation qua computation (as a process 
aimed at arriving at a calculation for some purpose to which we mean to put 
that calculation) but in computation as processing in the way a brain might 
operate, i.e., implemented computational processes! He errs every time he 
mixes up the activity of calculating with the activity of information 
processing as performed by the physical elements of brains AND computers.

In 7 he is once again confusing levels of description. A user/observer is 
relevant to a computer as a tool, but the brain is not its user/observer's 
tool; it is the source or medium of its user's existence.

Finally, in 8 we again find him slipping between meanings. If by "information 
processing" he only means whatever it is people do with computational tools 
like computers and calculators, then he's right. But that isn't what we mean by 
"information processing" when speaking about brains! His claim that the 
biological nature of the brain is paramount is assumed here, not demonstrated, 
as is his assumption about what "intentionality" is. This again points up his 
inherently mysterian concept of mind, a concept that, while explicitly denying 
dualism, finally hinges on it because of a commitment to a fundamental 
irreducibility of mind to its constituents, even while he persists in agreeing 
that brains do what he wants to say computers can never do.

SWM


