[Wittrs] Re: On computation

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 12 Mar 2010 03:56:42 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

I'll be snipping, and commenting on bits and pieces.

> If Hacker had his way, he would have us believe that the following
> scientific question is incoherent: "There is a way the brain causes
> consciousness."

I never did like that "incoherent" designation, particularly when
applied to something that does seem meaningful.  It might be a silly
question, and I don't much like talk of the brain causing
consciousness, but I still don't see that as incoherent.

> So I side with Searle when Searle responds to Hacker that some modes
> of philosophy (often inspired by epistemology) are disastrous for
> good science.

Fortunately for science, most physical scientists pay little attention
to philosophy.

> My natural numbers were hand-delivered via Pythagoras, though
> independently arrived at. I grunted 1, 1, 1..., saw groups, then
> named them 1, 2, 3,..., then thanked at least the Arabs for zero. It
> turns out that it is a very intentional activity and therefore
> bound to be influenced by real world meanings.

Well, certainly, mathematics is a very intentional activity.  But it is
abstract.  Its meanings are not about real world things.  However, I
grant that a lot of mathematics is motivated (and therefore influenced)
by thought about real world things.

>> My inclination is to say that the important problem is one of
>> symbolizing the ground, not of grounding the symbols.

> That was kinda cute.

It was not intended to be cute.  I am trying to draw attention to the
importance (perhaps "centrality" is a better term) of perception.  It is
our perceptual systems that symbolize the ground (i.e. form symbolic
representations).  And the importance of this seems to be overlooked.
Philosophers and AI folk alike want to work with representations,
but they avoid any attempt to examine how we form such representations.
But surely that is the fundamental question, namely: "how do
representations manage to represent?"  Our perceptual systems have to
be implementations of the answer to that question.

> But we're damned good at computing and Searle's not as pessimistic
> as, say, Penrose, who thinks weak AI is impossible.

I tend to think that weak AI and strong AI are the same thing (as
implied by the Systems Reply to Searle).  While I won't say that weak
AI is impossible, I do doubt that it will ever be achieved.

> But we're damned good at computing and Searle's not as pessimistic
> as, say, Penrose, who thinks weak AI is impossible. Maybe that is
> supposed to have something to do with Godel but some (like Howard
> Kahane who wrote _Logic and Philosophy_) don't think that Godel's
> results were as significant as some claim.

I'm not familiar with Kahane, though perhaps my view is similar.  I see
Godel's incompleteness theorem as a highly technical point of great
interest to mathematical logicians but of no real importance outside of
mathematical logic.  Evidently, Penrose gives it far more importance
than I do.

> Especially empiricist epistemology according to Fodor; but most don't
> have a clue as to what he means when he says that sort of thing.

I think Fodor gives you some idea of what he means in his "Language of
Thought" and his argument for innate concepts.  I disagree with Fodor
about both, but he does make some important points about empiricist
epistemology implicitly assuming that there are concepts without
explaining where the concepts come from.

> And don't forget about the representations themselves (no
> grounding problem for us nonepistemologists but a big problem for
> any machine whose representations are spelled out entirely in a
> machine language).

That there seems to be no grounding problem for us is due to the fact
that we take perception for granted.

> Fodor, in particular, criticizes Dennett in Fodor's
> _Representations_, a quite longish book with lots of fun chapters
> for those willing to wade through hours of Fodorian thought that
> seem to anticipate Searle's CRA, however weak you may think thought
> experiments--and I don't disagree that anyone can find such weak
> if they like.

I may have skipped some of the chapters, but I have read that book.  As
a general principle, thought experiments rarely prove anything.
However, they can be very useful in illustrating concepts.

>> Our ability to internalize computations is extremely limited.

> Can you please expand on whether Searle is just wrong to insist
> that the point you are making (and which Dennett made too) just
> misses the point?

I'll illustrate with an example from my childhood.  When I was around
10 years old, I came across the Tower of Hanoi.  I think it was in a
"Captain Marvel" comic book, though it might have been "Superman."  I
managed to persuade my father to make me a Tower of Hanoi puzzle to
play with.  I was already fascinated by mathematics at that age.

I saw the Tower of Hanoi as a recursive problem.  To move n disks from
peg 1 to peg 2, I had to first move n-1 disks from peg 1 to peg 3, then
move the next disk from peg 1 to peg 2 (a trivial move), and then move
the n-1 disks from peg 3 to peg 2.

As I played with it, I became quite good and could do it rather rapidly
without having to think much about it.  So you might say that I had
internalized the rules.  However, what I had actually internalized
was an entirely iterative set of rules (no recursion at all) which
leads to exactly the same sequence of moves.
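The contrast can be sketched in code (an illustrative sketch; the peg
labels and function names are my own).  The recursive function mirrors
the description above.  The iterative one uses the well-known cyclic
rule: on odd-numbered moves, shift the smallest disk one peg in a fixed
direction; on even-numbered moves, make the only legal move that does
not involve the smallest disk.  No recursion, yet the same sequence of
moves comes out.

```python
def hanoi_recursive(n, src, dst, spare, moves=None):
    """Move n disks from src to dst via spare, recording each move."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi_recursive(n - 1, src, spare, dst, moves)  # clear the top n-1
        moves.append((src, dst))                        # the trivial move
        hanoi_recursive(n - 1, spare, dst, src, moves)  # restack the n-1
    return moves

def hanoi_iterative(n, src, dst, spare):
    """Same sequence of moves, produced by a purely iterative rule."""
    pegs = {src: list(range(n, 0, -1)), dst: [], spare: []}
    # The smallest disk cycles through the pegs in a direction that
    # depends on the parity of n.
    cycle = [src, spare, dst] if n % 2 == 0 else [src, dst, spare]
    small_pos = 0  # which peg in the cycle holds the smallest disk
    moves = []
    for i in range(1, 2 ** n):
        if i % 2 == 1:
            # Odd move: shift the smallest disk one step along the cycle.
            a = cycle[small_pos]
            small_pos = (small_pos + 1) % 3
            b = cycle[small_pos]
        else:
            # Even move: the only legal move between the other two pegs.
            a, b = [p for p in (src, dst, spare) if p != cycle[small_pos]]
            if not pegs[a] or (pegs[b] and pegs[b][-1] < pegs[a][-1]):
                a, b = b, a
        pegs[b].append(pegs[a].pop())
        moves.append((a, b))
    return moves
```

For any n, both functions yield the identical list of 2^n - 1 moves, so
a solver drilled on the iterative rule is indistinguishable, move for
move, from one thinking recursively.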

This whole idea of internalizing rules is a bit dubious.  Come to think
of it, Searle himself argues against the idea of internalizing
representations.  You can find that at around page 150 of

> I'm told that it is difficult to make semantic contents
> metaphysically respectable; and who am I to make such a whining
> fuss over it if it's going to look like I'm not hip?!

I don't find anything metaphysical to be respectable.

> I have to point out that these days the reason Searle argues against
> (or doesn't need to argue against) strong AI is that it is incoherent
> given that the idea of computation doesn't name an intrinsically
> physical process.

There's that "incoherent" word again.

Searle is taking the AI people a bit too literally.  If computation is
abstract and non-physical, as I take it to be, then a computer is a
highly intricate mechanism, where the details of that intricate
mechanism can be controlled by the electrical settings made when we do
something referred to as "running a program".  Looking at it that way,
you have to understand the AI thesis to be about what is possible with
that kind of intricate physical mechanism.  All of the talk of
computation is just a convenient way of discussing it.  But it boils
down to intricate mechanism.  So Searle's argument that AI is
incoherent is just wrong.

> It's amazing that speech acts themselves are even credible these
> days what with all the incoherence on both sides!

I have only skimmed through Searle's writing on Speech Acts.  However,
I do think that it is a more realistic way to study language than what
is more commonly done in philosophy of language.  And, incidentally, I
rather liked Searle's review of Chomskyan linguistics.

