[Wittrs] On computation

  • From: Neil Rickert <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 10 Mar 2010 21:32:14 -0600

This is a response to Budd's post in another thread.
http://groups.yahoo.com/group/Wittrs/message/4551

>I always suspected that there were shifting perspectives on the
>part of the functionalists (David Lewis equating functionalism with
>a physicalism, Searle equating functionalism with something too
>abstract to be a candidate for a causal theory of mind, Chalmers
>thinking functionalism leads to epiphenomenalism, Armstrong
>(and Dennett) attempting an ontologically reductive account of
>functionalism as a physicalism with built-in teleology a la the
>intentional stance which involves levels of intentionality serving
>as that original functionalist notion of a level of explanation
>between the brute causal and intentional, the intermediate level
>known as the computational level).

Yes, there are many confused ideas floating around.  Many people
take the Church-Turing thesis to be a characterization of mechanism.
But it isn't that at all; it is a characterization of computation.
That particular conflation of computation with mechanism might go
back to Newell and Simon.  And then many people take physicalism
to imply mechanism - I suspect that's due to how they think of
scientific laws.

>The whole box being spelled out entirely in computational terms
>or not? And if not, then how is the hardware adding anything
>computationally? It is not, maybe? If not, then just what is the
>connection between the computation as a physical process with
>processes that are just brutish causal processes?

An interesting question.  Typically, a computer manufacturer will
describe the computer in entirely computational terms.  This will
include a list of machine instructions, and the computational
specifications of what they do.  The chip manufacturers, on the other
hand, will provide chip specifications in entirely electrical terms
(voltage/current levels, timing diagrams showing the sequence of
electrical signals, etc).
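
To illustrate the computational-level description (a hypothetical
sketch, not any actual vendor's documentation): the specification of
a single machine instruction can be stated entirely in terms of
register contents, with nothing at all said about voltages or timing.

    # Hypothetical computational-level spec of one machine instruction.
    # Nothing here mentions voltages, currents, or timing diagrams -
    # that level belongs to the chip manufacturer's electrical documents.
    def add(registers, dest, src1, src2):
        """ADD dest, src1, src2 : dest := (src1 + src2) mod 2**32"""
        registers[dest] = (registers[src1] + registers[src2]) % 2**32
        return registers

    regs = {"r0": 0, "r1": 7, "r2": 5}
    print(add(regs, "r0", "r1", "r2"))   # {'r0': 12, 'r1': 7, 'r2': 5}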

My personal view is that computation is abstract.  However, we
represent the abstract symbols as physical states, and the physical
computer is designed so that physical behavior of the machine is
an exact representation of the steps of the abstract computation
that we have in mind.  In effect, we choose to use a computational
description of the computer, and the computer is so designed that
the computational description fits very well.
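
A toy way to put the point, in Python (an illustrative sketch with
made-up voltage levels): the abstract computation is over the symbols
0 and 1; the machine traffics only in physical states, but it is so
designed that decoding those states always gives back the abstract
result.

    # Abstract computation: exclusive-or over the symbols 0 and 1.
    def xor_abstract(a, b):
        return a ^ b

    # A made-up physical encoding: 0 as about 0.2 volts, 1 as 3.1 volts.
    def encode(bit):
        return 3.1 if bit else 0.2

    def decode(volts):
        return 1 if volts > 1.5 else 0

    # The "machine" operates only on physical states ...
    def xor_physical(v1, v2):
        return encode(decode(v1) ^ decode(v2))

    # ... but by design its behavior represents the abstract steps.
    for a in (0, 1):
        for b in (0, 1):
            assert decode(xor_physical(encode(a), encode(b))) == xor_abstract(a, b)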

>Are we both conflating the two while insisting a la functionalism
>upon a distinction at the same time?

If you take computation as abstract, and what happens in the computer
as a physical representation of those abstract operations, then you
can equally say that there is a physical functionalism at work that
serves as a representation of the computational functionalism.
I seem to recall that Chalmers at one time looked at it that way.
Maybe he still does.

>In short, isn't there an intrinsic difference between systems which
>are described in fully functionalist terms like S/H (software on
>hardware) and are thus subject to the symbol-grounding problem,
>on one hand, and nonS/H systems like human and animal brains on
>the other?

I guess that depends on what "intrinsic" means.  I am inclined to
say that nothing physical is intrinsically computational.

As for symbol grounding, that should not matter for computation.
As a mathematician, if I am doing a computation using symbols that
have a real-world meaning, then I have to ignore that meaning while
computing.  Computation is, by its nature, a rule-following activity,
so it should not be influenced by real-world meanings.  So maybe
"symbol grounding" is not actually a problem.
My inclination is to say that the important problem is one of
symbolizing the ground, not of grounding the symbols.
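
A toy illustration of that (my own example, not drawn from any
particular system): a rule-following computation consults only the
identities of its symbols; rename every symbol consistently and the
computation is exactly the same, whatever the symbols are taken to
mean.

    # A tiny rewriting system: the rules map symbol pairs to symbols.
    # The computation looks only at which symbols occur, never at any
    # real-world meaning they might have for us.
    rules = {("sun", "sun"): "sun", ("sun", "rain"): "rain",
             ("rain", "sun"): "rain", ("rain", "rain"): "sun"}

    def rewrite(symbols):
        result = symbols[0]
        for s in symbols[1:]:
            result = rules[(result, s)]
        return result

    print(rewrite(["sun", "rain", "rain"]))   # sun

    # Rename the symbols consistently and nothing about the computation
    # changes - the computing itself does not depend on grounding.
    rename = {"sun": "A", "rain": "B"}
    rules = {(rename[a], rename[b]): rename[c] for (a, b), c in rules.items()}
    print(rewrite(["A", "B", "B"]))           # A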

Pretend that I could make a computational humanoid robot that
can talk about the weather.  It seems to me that the program
for the robot would have to be dealing with operating the sound
making apparatus on that robot, and perhaps with the motors that
control hand gestures.  And, as a result, the robot should be making
sounds that we hear as "sunshine" and "rain".  But I don't see that
the program itself has to be dealing with computations that use
"sunshine" and "rain" as symbols.  Rather the programming of the
robot is dealing with actions of the robot such as allow the robot
to behave in ways appropriate for talk of sunshine and rain.
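
Here is a sketch of the sort of thing I mean (purely hypothetical,
with invented sensor and speaker interfaces): the program makes a
control decision over a sensor reading and then sequences
articulations for the sound-making apparatus; we hear the output as
"sunshine" or "rain", but those words are not symbols the computation
operates over.

    # Purely hypothetical sketch of a weather-talking robot controller.
    class FakeLightSensor:
        def read(self):
            return 0.9          # stand-in for a brightness reading

    class FakeSpeaker:
        def emit(self, phoneme):
            print(phoneme, end=" ")

    def speak(phonemes, speaker):
        # The program is sequencing articulations, nothing more.
        for p in phonemes:
            speaker.emit(p)

    def comment_on_weather(sensor, speaker):
        # A control decision over a sensor value; the phoneme lists are
        # speaker commands, not weather symbols the program reasons with.
        if sensor.read() > 0.7:
            speak(["s", "uh", "n", "sh", "ai", "n"], speaker)
        else:
            speak(["r", "ei", "n"], speaker)

    comment_on_weather(FakeLightSensor(), FakeSpeaker())   # s uh n sh ai n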

To say it differently, I think the typical view of AI people tends
to be too simplistic.  To the extent that AI is just a mechanization
of epistemology, maybe epistemology is also too simplistic.

>Searle thought such a shift might be disastrous and lead to
>a sort of hylozoism such that strong AI wouldn't in principle
>be able to distinguish those systems that have minds from, say,
>thermostats which don't--unless we think of intentionality in terms
>of degrees such that thermostats have beliefs in virtue of their
>being describable as performing computations.

I am inclined to say that the problem really goes back to
epistemology, which AI attempts to mechanize.  Epistemology says
that knowledge is "justified true belief", and the AI people take
that to be a matter of representations.  And they know how to do representations
(create knowledge bases).  Admittedly, epistemology insists that
its representations be intentional.  However, that intentionality
does not seem to actually play any role.  Thus Fodor, in his
"methodological solipsism" paper, says that logic operates on
representations according to the formal properties of those
representations.  So there does not seem to be a need for
intentionality if only the formal properties are used.  I'll add
that I have never been much impressed with epistemology.
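
To make that point about formal properties concrete (my own toy
example, not Fodor's): an inference rule such as modus ponens operates
only on the formal shape of the representations, and it applies in
exactly the same way whatever the sentence letters are taken to be
about.

    # Modus ponens as an operation on formal structure alone:
    # from P and ("->", P, Q), infer Q - whatever P and Q are about.
    def modus_ponens(premise, conditional):
        op, antecedent, consequent = conditional
        if op == "->" and antecedent == premise:
            return consequent
        return None

    print(modus_ponens("P", ("->", "P", "Q")))                 # Q
    print(modus_ponens("raining", ("->", "raining", "wet")))   # wet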

 --------

>>Incidently AI people have long attempted to implement semantics
>>with data structures, though not with any great success.

>And didn't he allow that he could (in response to the systems reply)
>internalize the computational data structures (he called them extra
>bits of paper rather off hand but implied they were to be thought of
>as simply more computation, not more hardware) and (ex hypothesii)
>pass a TT yet still not have the semantics?

That's quite unrealistic.  Our ability to internalize computations
is extremely limited.
http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two

 --------

>The heavy lifting being the machine language that Fodor also pointed
>out is subject to the symbol grounding problem.

The machine language only deals with internal operations.  There
isn't a problem with that.  Fodor is probably confused about that,
given that he mentions machine language as an illustration for his
"Language of Thought," and LoT does require semantics for external
things.

>I'm supposing Searle wanted to focus on their heavy lifting which was
>to be a different sort of lifting compared to type-type physicalism.

I'm not sure what he wants.  I think Searle's intuition is that the
kind of AI he was criticizing could not possibly work - and I tend
to agree with him on that.  Searle seems to have good intuitions,
but some of his arguments are weak.

Regards,
Neil