[Wittrs] Re: I Experience in Ordinary Language

  • From: "gabuddabout" <gabuddabout@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 08 Mar 2010 23:37:44 -0000


--- In WittrsAMR@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> --- On Sat, 3/6/10, SWM <wittrsamr@...> wrote:
>
> > Actually, as I recall, Searle is on record as saying that
> > there would be no basis for suggesting that an alien with
> > what appeared to be green slime in its head (or whatever
> > passes for that) wasn't conscious if it acted in a conscious
> > way. While he presumes there is some physical arrangement
> > that makes consciousness possible, he doesn't take the stand
> > that there can only be one type of arrangement, namely
> > brains or what we take to be brain-like, that can do it. I
> > don't recall the source of this example but it is probably
> > to be found in one of the following:
>
> True, he does not take the stand that there cannot be some other arrangement, 
> and he allows for such speculation. But in _Rediscovery of Mind_ (third 
> chapter, if memory serves) he argues that we ought not to ascribe 
> consciousness solely on behaviorist grounds. He argues as I have: that we 
> should look both at neurological similarity and behavior. With these criteria 
> we can ascribe consciousness not only to our fellow humans but also to some 
> other animals.
>
> -gts


Hi Gordon and Stuart.

We'll get to the idea of "arrangement" below.

First, though, note the following behaviorist criterion for ascribing 
consciousness:

"if it acted in a conscious way," [then there are grounds for ascribing 
consciousness]--and Searle says as much.

So what follows from what Searle acknowledges is this:

If a computer could answer questions about a story in virtue of running a 
program, then the behaviorist criterion would be fulfilled, and he would be 
obligated to see this as a possibility.  He does.


But Searle shows a possible case where there is the relevant behavior without 
the semantics, given that programs are spelled out entirely in second-order 
property terms. As such they involve a notion of "electrical arrangement" in 
which the electricity is funnelled through logic gates, so that a program is a 
purely formal affair/arrangement.
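
Something like the following toy sketch (my own illustration, not Searle's, 
and the story and question strings are made up) shows what "purely formal" 
comes to here: the behavior of answering questions about a story can fall out 
of nothing but shape-matching over uninterpreted strings.

story = "Maria ordered a hamburger. She ate it and left a large tip."

# The "program" is just a table pairing question-shapes with answer-shapes.
# Nothing in it refers to hamburgers or tips; the tokens could be arbitrary.
rules = {
    "Did Maria eat the hamburger?": "Yes, she ate it.",
    "Was Maria satisfied?": "Probably, since she left a large tip.",
}

def answer(question):
    # Pure symbol manipulation: match the shape, emit the paired shape.
    return rules.get(question, "I don't know.")

print(answer("Did Maria eat the hamburger?"))   # behaviorally correct
print(answer("Was Maria satisfied?"))           # yet nothing is understood

The behaviorist criterion is met by the output alone, which is exactly why 
Searle thinks it can't be the whole story.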

I've found that Stuart normally doesn't distinguish arrangements which are 
essentially arrangements of S/H (software running on hardware) from 
arrangements which are a matter of brute first-order causal properties.

If one doesn't distinguish these, then one might be inclined to think that 
Searle is contradicting himself when he says that brains can cause 
consciousness but computers can't--and not only can't, but that it is 
incoherent even to ask whether a formal program can cause anything.  One may be 
inclined to lump human and Martian brains in with computers.

The systems reply changes the subject. If one is aware of the target article 
from BBS, one will have no doubt about Searle's position: he is explicitly not 
arguing against the notion of a system (as complex as necessary) akin to PP 
(parallel processing), except that it must be a system that is not 
fundamentally S/H.  If the system is S/H, then you can have all the weak AI you 
want, and I'm told that is hard enough.  But it is not the sort of thing that 
could be thought to do anything except simulate this or that.

Why isn't that enough?

Because S/H only works with second-order properties.  The first-order 
properties of the electricity are one thing; the arrangement of the 
electricity through logic gates is another, and it makes for a system of 
second-order properties--else we wouldn't know the first thing about how 
software can run on hardware.
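
Here is a toy illustration (mine, not from the BBS article) of that 
first-order/second-order distinction: the gate definitions below fix only an 
arrangement--which outputs follow which inputs--and say nothing about the 
first-order properties of whatever realizes them (voltages, water valves, 
people passing cards).

def nand(a, b):
    # Defined purely by its input/output arrangement, not by any physics.
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# The truth table is all the program specifies; the medium is left open.
for a in (False, True):
    for b in (False, True):
        print(a, b, and_(a, b), or_(a, b))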

The Hacker problem (the one with PMS).  A bad joke.  But, as P. M. S. Hacker 
has it, the thesis that the brain causes consciousness is senseless.

Searle thinks that is probably just a result of obsolete Cartesian categories 
and a symptom of what has been called conceptual dualism.

Searle likens the exploration of how the brain does it to the development of 
the germ theory of disease: first find correlates, then look for causation.

The problem is that the closest one gets is "overwhelming plausibility."  On 
the other hand, with computational functionalism, if it walks like a duck....

Problem (dis)solved for computational functionalists?

Never!

But I could be wrong!


Cheers,
Budd





