[Wittrs] Re: On computation

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Thu, 11 Mar 2010 23:59:30 -0000

Great responses, Neil.  I'll snip and focus on a couple below.

Before doing that, I'll say that the likes of Fodor and Searle share your view 
of the relative unimportance of epistemology.  It was thanks to Wittgenstein 
that a way opened up (for Searle, Fodor, Block, to name a few) for a type of 
philosophy Wittgenstein wouldn't have liked.  Ironic but true; philosophy is 
not to be too sharply distinguished from science.

For a contrary view involving chocolate on my account, just imagine P. M. S. 
Hacker prescribing how to do science while trusting that philosophy is a 
necessarily separate discipline.  If Hacker had his way, he would have us 
believe that the following scientific question is incoherent: "How does the 
brain cause consciousness?"

This is a very beautiful example of the excesses of one on a severe diet.  He's 
got all that chocolate under his nose and insists he's being true to some 
outmoded Wittgensteinian austerity nobody believes in anymore excepting 
(oops!) certain gluttons who can supposedly stomach just about any mishmash 
of words since idioms will be idioms.

So I side with Searle when Searle responds to Hacker that some modes of 
philosophy (often inspired by epistemology) are disastrous for good science.

[snip]

> Yes, there are many confused ideas floating around.  Many people
> take the Church-Turing thesis to be a characterization of mechanism.
> But it isn't that at all; it is a characterization of computation.
> That particular conflation of computation with mechanism might go
> back to Newell and Simon.  And then many people take physicalism
> to imply mechanism - I suspect that's due to how they think of
> scientific laws.

I agree.


> >In short, isn't there an intrinsic difference between systems which
> >are described in fully functionalist terms like S/H (software on
> >hardware) and are thus subject to the symbol-grounding problem,
> >on one hand, and nonS/H systems like human and animal brains on
> >the other?
>
> I guess that depends on what "intrinsic" means.  I am inclined to
> say that nothing physical is intrinsically computational.


And that is Searle's "new argument," and yet he still thinks he gets an 'A' for 
his CRA even though you find that argument weak.  It was a thought experiment, 
after all, designed to get AI theorists to cough up exactly what they meant.  He 
got some to think about it, and some described their view as not being strong 
AI, but there was flip-flopping.  Later, you repeat a Dennettian reply.  
I'll get to that shortly.


>
> As for symbol grounding, that should not matter for computation.

Agreed.

> As a mathematician, if I am doing a computation and using symbols
> that have a real world meaning, then I have to ignore that real world
> meaning while computing.  It's the nature of computation, that it is
> a rule following activity, so should not be influenced by real world
> meanings.


Agreed, up to a point.  My natural numbers were hand-delivered via Pythagoras, 
though independently arrived at.  I grunted 1, 1, 1..., saw groups, then named 
them 1, 2, 3,..., then thanked at least the Arabs for zero.  It turns out that 
it is a very intentional activity and therefore bound to be influenced by real-world 
meanings.  But I agree that computing, say, what it sounds like to play a 
polyrhythm like 5:3 involves finding the LCM (least common multiple), and the 
LCM isn't going to be found anywhere in physical space, though it perhaps relies 
on physical space given the original intentionality.  So I can't, er, don't 
pretend to believe in some "world 3" a la either Popper or Penrose.



> So maybe "symbol grounding" is not actually a problem.
> My inclination is to say that the important problem is one of
> symbolizing the ground, not of grounding the symbols.


That was kinda cute.  There is absolutely no symbol-grounding problem for any 
machine not designed to overcome it!  And there is a symbol-grounding that 
takes place (is overcome) given biological systems such as ourselves who mean 
things by what they say.  Also, there is a symbol-grounding problem inherent in 
the very notion of a functional system spelled out in second-order (abstract) 
properties, even if the way the second-order properties are "realized" is 
through a combination of first-order properties of electricity routed through 
logic gates (such routing makes for purely second-order properties and is the 
level of description we are both taking to be abstract).

Symbol-grounding seems to be THE problem for strong AI, not weak AI.  But maybe 
only weak AI was originally meant by those Searle described as subscribing to 
what he dubbed "strong AI."  Stage directions ensue:  switch language games 
anon.

But you're right to point out that symbolizing the ground is something actually 
attempted.  I suppose string theorists attempt it when looking for a smallish 
equation.  And Searle attempts it a la symbolizing the ground floor of speech 
acts, F(p), and Intentionality, S(p).

Side note.  Searle responds to Armstrong by saying that it probably shouldn't be 
expected that we find just one overall theory of Intentionality.  He allows 
that there might be all sorts of specific problems got at from all sorts of 
angles.  The problem of how the brain causes consciousness/allows for semantics 
Searle thinks ought to be modelled on how we came up with the germ theory of 
disease:  find the neural correlates of consciousness (NCCs) and then look for 
causation and mechanisms later.  This, of course, will always allow room for a 
tighter theory.  For some, it is a form of mysterianism because it might turn 
out that the real story is simply too complicated for any human being to 
understand.

But we're damned good at computing and Searle's not as pessimistic as, say, 
Penrose, who thinks weak AI is impossible.  Maybe that is supposed to have 
something to do with Godel but some (like Howard Kahane who wrote _Logic and 
Philosophy_) don't think that Godel's results were as significant as some 
claim.  Anyway, far be it from me to say much here about it.  But I'd listen to 
you all day on the topic.



>
> Pretend that I could make a computational humanoid robot that
> can talk about the weather.  It seems to me that the program
> for the robot would have to be dealing with operating the sound
> making apparatus on that robot, and perhaps with the motors that
> control hand gestures.  And, as a result, the robot should be making
> sounds that we hear as "sunshine" and "rain".  But I don't see that
> the program itself has to be dealing with computations that use
> "sunshine" and "rain" as symbols.  Rather the programming of the
> robot is dealing with actions of the robot such as allow the robot
> to behave in ways appropriate for talk of sunshine and rain.


Well, Searle is not arguing against weak AI, which, as far as I can tell, is 
all that you just described above, right?


> To say it differently, I think the typical view of AI people tends
> to be too simplistic.  To the extent that AI is just a mechanization
> of epistemology, maybe epistemology is also too simplistic.

Especially empiricist epistemology, according to Fodor; but most don't have a 
clue as to what he means when he says that sort of thing.  What he means is 
that thoughts are to come first in the order of analysis, not epistemological 
capacities to discriminate via sorting or inferential role, as in what a 
computer may be able to do even without a symbol-grounding problem if that's 
all you are building it for.  Cf. (if you like) Jerry Fodor's "Having Concepts: 
A Brief Refutation of the Twentieth Century."  With a title like that, how 
could any deep thinker resist reading it!


>
> >Searle thought such a shift might be disastrous and lead to
> >a sort of hylozoism such that strong AI wouldn't in principle
> >be able to distinguish those systems that have minds from, say,
> >thermostats which don't--unless we think of intentionality in terms
> >of degrees such that thermostats have beliefs in virtue of their
> >being describable as performing computations.
>
> I am inclined to say that the problem really goes back to
> epistemology, which AI attempts to mechanize.  Epistemology says
> that knowledge is "justified true belief", and the AI people take
> that as representations.  And they know how to do representations
> (create knowledge bases).  Admittedly, epistemology insists that
> its representations be intentional.  However, that intentionality
> does not seem to actually play any role.  Thus Fodor, in his
> "methodological solipsism" paper, says that logic operates on
> representations according to the formal properties of those
> representations.  So there does not seem to be a need for
> intentionality if only the formal properties are used.


Representations might be a little Janus-faced here.  Many of mine are grounded 
while all of the machine's are not.  And don't forget about the representations 
themselves (no grounding problem for us nonepistemologists but a big problem 
for any machine whose representations are spelled out entirely in a machine 
language).

I'll let you decide if I just made a distinction without a difference and I 
will take any epistemological thumping you see fit to perform!


> I'll add
> that I have never been much impressed with epistemology.
>
>  --------

And that has been a contemporary point of both Searle and Fodor.  Fodor, in 
particular, criticizes Dennett in Fodor's _Representations_, a quite longish 
book with lots of fun chapters for those willing to wade through hours of 
Fodorian thought, chapters that seem to anticipate Searle's CRA, however weak 
you may think thought experiments are--and I don't disagree that anyone can 
find them weak if they like.


> >>Incidently AI people have long attempted to implement semantics
> >>with data structures, though not with any great success.
>
> >And didn't he allow that he could (in response to the systems reply)
> >internalize the computational data structures (he called them extra
> >bits of paper rather off hand but implied they were to be thought of
> >as simply more computation, not more hardware) and (ex hypothesi)
> >pass a TT yet still not have the semantics?
>
> That's quite unrealistic.  Our ability to internalize computations
> is extremely limited.
> http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two

One can agree (and it was Dennett's point too, I think) that it is 
unrealistic while still seeing the point of it via the thought experiment.  Can 
you please expand on whether Searle is just wrong to insist that the point you 
are making (and which Dennett made too) just misses the point?


> >The heavy lifting being the machine language that Fodor also pointed
> >out is subject to the symbol grounding problem.
>
> The machine language only deals with internal operations.  There
> isn't a problem with that.

If you are an epistemologist doing weak AI as if it were a Dennettian way of 
doing consciousness research!


> Fodor is probably confused about that,
> given that he mentions machine language as an illustration for his
> "Language of Thought," and LoT does require semantics for external
> things.

I think he was after mechanisms in the brain that allowed for thought and 
simply assumed, as anyone might who is not some austere epistemologist, the 
intentional nature of semantics that, for us at least, has ceased to be mired 
(except for antirealists and externalists in philosophy of mind, if it should 
even be called that) in any intrinsic symbol-grounding problem.  The 
nonepistemologists, however, are accused of pretending they are over a hump 
they can't prove they've gotten over.  OTOH, others think that weak AI is all 
that is needed to get over the hump of the symbol-grounding problem because, 
well, there never was such a problem and it can be dissolved via linguistic 
analysis and perhaps the Wittgenstein-inspired intentional stance which Searle 
found in Dennett's _Consciousness Explained_ to involve disagreement with 
Searle's second premise of the summary CRA:  that minds have semantic contents.

I'm told that it is difficult to make semantic contents metaphysically 
respectable; and who am I to make such a whining fuss over it if it's going to 
look like I'm not hip?!



> >I'm supposing Searle wanted to focus on their heavy lifting which was to
> >be a different sort of lifting compared to type-type physicalism.


> I'm not sure what he wants.


He wants what Rambo wants, of course.  And what scientifically unconfused 
philosophers want:  acknowledgement (at least!) that (and how!) we can get a 
theory (fallible as it is going to be, given that it will at best seem merely 
overwhelmingly plausible) of:

1.  How the brain causes consciousness.

2.  How memory works.

3.  How one can do philosophy in a productive way that is not totally some 
enterprise outside the domain of good science.

4.  How certain arguments over what philosophy is (oopsies!) are fruitless, 
including how arguments over qualia are confused.

5.  How it doesn't make any sense to think one can have a science of society in 
a way that one can have a science of physics.

6.  An understanding of scientific laws and how such laws have nothing to do 
with our power of thinking, except when we have a science of how we think, 
which also wouldn't diminish the number of thoughts we already had.


> I think Searle's intuition is that the
> kind of AI he was criticizing could not possibly work - and I tend
> to agree with him on that.  Searle seems to have good intuitions,
> but some of his arguments are weak.
>
> Regards,
> Neil


Great responses, Neil, though with the usual salt.  Not that I minded hearing 
your seasonings.  And you are seasoned, it appears.  Not that I wanted to (or 
did or could) swallow everything.  And thanks for the references.  Please add 
some more in the future and let me know of any great papers on what Godel 
thought should be a principled way of showing the continuum hypothesis false 
(side project).

I have to point out that these days the reason Searle argues against (or 
doesn't need to argue against) strong AI is that it is incoherent given that 
the idea of computation doesn't name an intrinsically physical process.  So 
he's not really arguing anymore about it.  He still thinks he gets an 'A' for 
his CRA; but today thinks it is like arguing metaphysical realism:  Since there 
is no coherent thesis of antirealism on offer, Putnam's being simply a bad 
argument, then arguments for metaphysical realism look like they are 
unnecessary, on one hand, and make it appear a matter of choice if argued for, 
on the other.

I suppose that is why Hacker maybe uses the same tactic against Searle and 
simply lays down that Searle's "scientific" question about how the brain causes 
consciousness is itself incoherent.

It's amazing that speech acts themselves are even credible these days what with 
all the incoherence on both sides!  It can remind one of the schism where 
everyone at one time was condemned to hell.  Yet still, no amount of 
epistemology is going to bypass speech acts and it is these which form the 
foundation of philosophy these days--but didn't they always?

Perhaps Wittgenstein was right.  Philosophy done really well doesn't amount to 
very much compared to the feats of the natural and biological sciences.  I 
don't think Searle would necessarily disagree, even though it is a bit 
difficult to do even that little at its very best.

Perhaps most would agree.

Has Searle been understood?

You tell me!

And just how in hell was Nietzsche (Mr. "Have I been understood?  Dionysus 
versus the Crucified!") supposed to complete western philosophy?  Wasn't it 
Spinoza first who drew an ouroboros on a lot of his correspondence?  :-)


Cheers,
Budd










=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
