[Wittrs] Re: Constituting Subjectivity

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 18 May 2010 12:36:26 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:

<snip>

>>> okay; but, isn't that just what you have called irreducibility?

>> No. I've referred to "irreducibility" as the supposition that, whatever
>> consciousness is (e.g., features like understanding, imagining, being
>> aware, etc. -- or an aggregate of them), it cannot be explained as
>> being the outcome of events or activities or operations more basic than
>> itself (i.e., not already having the nature of those features). The
>> molecules of water aren't wet at the atomic level because there is no
>> wetness, as far as we know, at that level. But at our level of
>> observation one of the features of the aggregation of the said
>> molecules, under certain conditions, is the phenomenon or feature we
>> know as "wetness".
>>
>> This, by the way, comes straight from Searle, himself, though he uses
>> it in a different context, making a different point. When it comes to
>> the CRA, however, he seems to forget about this as a possible
>> explanation for the occurrence of consciousness in the universe and
>> seems, instead, to entirely miss the point that a system level
>> explanation of consciousness, which sees consciousness as a complex of
>> features operating at a certain level, is not excluded from the realm
>> of possibility and may, indeed, offer the best explanation of what
>> consciousness is.

> how is this different from Searle's position? surely you can give us
> some reason to think that Budd is wrong to say that you are merely
> advocating Searle's position while denying that you are doing so.

As I note above, Searle does not recognize this possibility in relation to what 
he calls the "syntax" of computing, i.e., in the argument we know here as the 
"CRA".

By not doing so, he misses the POSSIBILITY that so-called "syntax" 
(computational programs running on computers) COULD produce subjectness in the 
aggregate, even if it doesn't do so in a stand-alone fashion (i.e., each 
instance of such a process neither brings subjectness into existence as a new 
and separate phenomenon nor is it what we mean by subjectness nor does it have 
some property that is subjectness).

To say that a mind is an outcome or product of an aggregation of syntactical 
operations (sticking, for argument's sake, with Searle's admittedly rather 
specialized use of "syntax") of a certain kind (because not every such 
aggregate can be expected to do it, any more than every aggregate of molecules 
will produce wetness) is to note that mind is a system level function 
(phenomenon, property, characteristic, feature, etc.).

Searle's CRA does not take this option into account and his denial of it when 
it is presented shows that his position reflects a particular conception of 
mind (which is reaffirmed when Searle tells us that mind is ontologically 
"first person" and so irreducible, thereby confusing the reduction question, 
itself).

After all, Searle could have said: ah yes, the CR is simply the wrong system; 
it isn't specced robustly enough to do what brains do (i.e., cause minds), etc. 
But he doesn't.

Instead he argues that the so-called System Reply doesn't answer his challenge. 
But he can only do that if he denies the system level explanation because, if 
he accepted it, then the System Reply would work.

But he does not reject the system account vis a vis brains (though, again, he 
fails to do more than acknowledge that brains must do it and that it is 
reasonable to assume that some day we may figure out how). Now this puts him in 
contradiction: if brains do it in a system way, why shouldn't other physical 
platforms (say, computers) do so?

After all, there is no reason to think only a brain can "cause" consciousness, 
even if the brain is the only thing we currently know to do so. If the brain is 
a physical entity, then we know that at least one physical entity can cause 
consciousness. Why not some other entity?

And while Searle acknowledges even this possibility when he says some day 
scientists may, indeed, build artificial brains, he aims to exclude computers 
from this class from the get-go. He initially did so with the CRA but, as Budd 
has reminded us, Searle moved away from it (though without formally recanting 
it) over the years in favor of an argument that says that the thesis he 
opposes, "strong AI", is really about some idea of pure computation and, since 
whatever it is a computer process is doing is only in the mind of the 
programmers and computer users, there is no reality to that (in the sense of it 
having the capacity to cause anything). They are, he assures us, just 
abstractions.

That is, computation is a function of what entities with minds do with symbols, 
so computers, he argues, stepping beyond the CRA, are the wrong kind of thing 
to replicate what brains do because they are just devices for performing 
abstract operations, like pen and paper or a keyboard. What they "do" only has 
the meaning we impute to it.

But this is a worse argument than the CRA because it confuses the idea of tools 
(a thing of whatever nature is whatever we conscious entities do with it) with 
physical stuff (I don't mean "substance" in any esoteric or metaphysical sense 
here, just whatever has the features of what we call physical objects). 
Physical things -- physics -- cause things to happen, says Searle in this 
argument, but symbol manipulation depends on minds to observe and grasp what is 
done. What happens occurs only in those minds, not in the world. So minds must 
come from some physical phenomenon, while computational programs running on 
computers (computer processes) aren't physical. As Budd persistently puts it, 
they're not "machine enough".

But this is absurd since no one argues computers aren't physical and no AI 
researcher imagines that a computer program implemented on a computer doesn't 
involve physical events. Therefore the only question is whether THOSE physical 
events can do the same things physical events in brains manage to do. And you 
can't answer THAT question simply by redefining what computers are doing as 
non-physical.

Yes there is a symbolic aspect, a representational aspect, to a computer 
program running on a computer. It involves codes representing algorithmic 
instructions which, when run on the computer, cause certain physical events to 
happen in the machine. But so, too, do brains involve codes (DNA) that produce 
certain physical events and no one (or no one on this list as far as I know) 
disputes that those physical events in brains often result in what we call 
"consciousness".


> what's the difference between the position you just outlined and
> Searle's summarization of his own position:
>
> "To summarize my general position, then, on how brain research can
> proceed in answering the questions that bother us: the brain is an organ
> like any other; it is an organic machine. Consciousness is caused by
> lower-level neuronal processes in the brain and is itself a feature of
> the brain. Because it is a feature that emerges from certain neuronal
> activities, we can think of it as an 'emergent property' of the brain.
>
> An emergent property of a system is one that is causally explained by
> the behavior of the elements of the system; but, it is not a property of
> any individual elements and it cannot be explained simply as a summation
> of the properties of those elements. The liquidity of water is a good
> example: the behavior of the H2O molecules explains liquidity but the
> individual molecules are not liquid". ["Consciousness as a biological
> problem" in _The Mystery of Consciousness_. 17-18]
>
> Joe


A good choice to take text from his later work, _The Mystery of Consciousness_, 
wherein he introduces, for one of the first times, his new argument that, he 
tells us, supersedes the CRA.

But what it shows is that Searle has a blindspot with regard to the system 
level formulation when it comes to computers. I think he is roughly right in 
his presentation of how brains work (or, at least, I can see no reason to think 
he has it wrong). The problem is that he goes from this to the idea that 
computers can't do what brains can BECAUSE computer processes are (as he puts 
it in the CRA) "syntax" or (in his later argument) abstract (not even "syntax" 
as he puts it in The Mystery of Consciousness).

The CRA has lots of problems, not least that it argues against, for computers, 
what he argues for in brains, without giving us a good reason for treating 
computers differently, and thus puts him into contradiction vis a vis the two 
platforms. I suspect he came to see this, which is why, by the way, he produced 
the later argument.

But his later argument is worse than the CRA when it finally drops the claims 
of the CRA in favor of simply asserting that computational processes running on 
computers (implemented programs) are pure abstractions and so lack any ability 
to cause anything in the world. Of course any program in its notational form or 
in the head of the programmer is, indeed, abstract in a sense. It doesn't have 
any causal power in a purely physical sense. But as the early Searle himself 
noted, when we're talking about "strong AI" we're talking about "implemented 
programs" and they are hardly abstract. They are, indeed, whatever the machines 
implementing them do!

He asks us to think that computers qua machines are different in some logical 
sense from brains, while at the same time acknowledging that a brain is (as you 
remind us above with that quote) an "organic machine". But there is no strong 
reason to disregard the computer's physicality while embracing the physicality 
of the naturally occurring brain, and certainly just redefining computer 
operations as "abstract" isn't a good reason. (Note that his later argument 
links up with the earlier one in this sense: in both cases he calls computer 
processes abstract, though in the CRA he does it by asserting they are "syntax" 
while in the later argument he does it by denying they are even "syntax", 
because he divorces the symbolic manipulation of "syntax" from the physical 
processes expressing those manipulations.)

Now Edelman makes an argument that being organic imparts to the brain, qua 
machine, a level of complexity that is just unavailable to computers (because 
of the serendipitous nature of what he calls selectionism that is the relevant 
operating factor in brains, while computers are instructional and thus overly 
organized). Personally I don't find Edelman's argument especially compelling 
though it is, at least, empirical in its approach (it can be tested in the 
world).

Searle, on the other hand, wants to discount computers out of the gate on the 
spurious grounds that they aren't like brains in a critical way but then fails 
to show how there is any such difference (a critical way), especially since he 
admits we just don't know how brains actually work (Edelman's theories, and 
Hawkins', notwithstanding, given that they are just theories at this point and 
not backed up by empirical results).

So, in the end, Searle is in contradiction with his own positions when 
comparing brains and computers. He either holds that brains are the right kind 
of physical systems and computers the wrong kind, though he doesn't know what 
kind brains actually are (in which case how can he assert computers are the 
wrong kind?), or he asserts that computers are the wrong kind because the 
important thing they do is merely abstract (not even "syntax"), thereby 
dismissing their physicality which they have in common with brains (just as 
they have in common the idea of being encoded to perform certain functions and 
not others).

In sum, everywhere you turn you find contradictions in Searle's opposition to 
the possibility that computers can produce minds. Of course, he may be right 
(for reasons identified by Edelman or Hawkins or some other thinker down the 
road). But his logical strategy is not a reason for thinking he is right 
(either in its original CRA expression or in his later version).

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
