[Wittrs] Re: Searle's CRA and its Implications

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 15 Mar 2010 22:25:37 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@> wrote:
>
> > --- On Sun, 3/14/10, SWM <wittrsamr@> wrote:
> >
> > > Searle writes in the header of that
> > > article: "A program merely manipulates symbols whereas a
> > > brain attaches meaning to them."
> > >
> > > Well yes, of course, but the question before us is HOW does
> > > the brain do that, not WHETHER it does that and
> >
> > Nobody in 2010 knows how the brain attaches meanings to symbols. However 
> > the CRA illustrates very clearly that the brain cannot accomplish this feat 
> > solely by manipulating symbols according to syntactic rules.
> >
>
> If nobody knows how the brain does it, how can anyone say with such a degree 
> of definitiveness what you have just said?


They will say it because they think that the functional properties of a
computer (the properties that make it a computer, namely its manipulation of
syntactic strings) are second-order properties.

That's why the systems-reply proponents are asked whether they are speaking of
functional systems (S/H, i.e., software running on hardware) or not.  If the
whole system (even granting, for argument's sake, that Searle's response commits
a mereological fallacy, though it really doesn't) is a computational system in
which the hardware doesn't technically matter (it need only be robust enough to
carry out computations, simple to complex, defined syntactically), then Searle
has just gotten the systems repliers to acknowledge the abstract nature of what
they mean by a system.  This is why Searle says matter-of-factly that computers
aren't machine enough to be candidates for a theory of mind.
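
To make concrete what "computations defined syntactically" amounts to, here is a
minimal sketch of my own (Python; the squiggle/squoggle symbols are just Searle's
stock placeholders, and the tiny rule table is made up for illustration): a
"system" that pairs input strings with output strings and does nothing else.  Any
hardware capable of the lookup runs it identically, which is the sense in which
the hardware doesn't technically matter.

    # Purely syntactic rule table: input shape in, output shape out.
    # The program attaches no meaning to the symbols it shuffles.
    RULES = {
        "squiggle": "squoggle",
        "squoggle": "squiggle",
    }

    def respond(symbol: str) -> str:
        """Return whatever string the rule table pairs with the input string."""
        return RULES.get(symbol, "?")

    print(respond("squiggle"))  # prints "squoggle" with no grasp of what it means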

OTOH, if the systems repliers are speaking of non-S/H systems, then they are
either contradicting themselves or changing the subject.  Some think Searle got
a number of them to change their minds on the issue of strong AI, such that
they now mean (or even meant all along) weak AI.

And some consider weak AI to be the holy grail.  That Searle argues weak AI is
insufficient for a theory of mind doesn't mean he thinks weak AI impossible.
It is Penrose who thinks that simulating a mind is impossible, on Gödelian
grounds.

Anyway, the reason Searle is called a dualist even by some weak AIers (like
Dennett) is that they may share Hacker's view of Wittgenstein's criteriology.

If all we are after is behavior (even the behavior of a mind at a system level
higher than the processes which CAUSE subjectivity), then Searle's proposal
will sound as if we need something extra-behavioral, hence extra-physical.

If one allows that the only reason Searle thinks strong AI incoherent (but weak
AI doable, even if not a candidate for a theory of mind) is that it is all about
functional properties, which are one and all second-order properties (though
they rely on first-order properties to run the program), then one will never
think him a dualist merely for critiquing a computational theory of mind.

But it is easy to misinterpret Searle, because so many already have and we
often mindlessly mimic them.  Hell, imagine a thesis that mind is a construct
having to do with memes and parroting.  That's not even a good joke!

How can a joke work as a theory of mind?

Cheers,
Budd



=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
