[Wittrs] Re: Bogus Claim 3: Validity Issue: Where is the Equivocation?

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 28 Apr 2010 10:13:35 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:

> SWM wrote:
>

>  >An equivocation is when we switch meanings of a term in the process of
>  >the argument and that is precisely what has occurred here.
>

>  >The non-identity reading is conceptually true as Searle claims and we
>  >can readily grant it. But then Searle wants us to use that premise to
>  >support a conclusion about non-causality. The reason it looks
>  >compelling is because we recognize the claim of conceptual truth in the
>  >first way of reading the text. What many of us then miss, however, is
>  >that the meaning of the text shifts in order to get us to the
>  >conclusion because NON-IDENTITY DOES NOT IMPLY NON-CAUSALITY.
>

> the only 'shift' is the one that you have performed before our very
> eyes. you read 'does not constitute semantics' and you shift that to 'is
> not identical to'.
>
> can you show that there is a shift in meaning in the CRA *without* first
> arbitrarily shifting the vocabulary that Searle actually uses?
>

Searle's vocabulary is vague, possibly deliberately so. At least it's not clear 
that it isn't. Unlike you, who provided a list of definitions for SOME of your 
terms, he does not provide precise and firm definitions for the terms I've 
cited (nor, by the way, did you). Searle does offer various definitions over 
the years, of course, but there is a fuzziness to many of them, and a tendency 
to change their meanings over time.

Your reference to the Stanford Encyclopedia of Philosophy's article on 
"constitutive" demonstrates that that term, itself, is highly problematic in 
philosophy. In ordinary English it has a range of meanings, of course, like 
most terms.

Searle, needless to say, expresses his CRA in ordinary English. And he says of 
the third premise (geez, I must have made this point a hundred times already 
but you just don't give up denying it) that it is "conceptually true", where 
the only reading that passes muster is the non-identity reading, i.e., that 
syntax is not semantics.

But Searle takes the third premise as grounds for a denial of causality, as in: 
computers can't cause consciousness (as brains do). That means he uses the 
third premise in a way that the claim of its being conceptually true doesn't 
support, because non-identity does not imply non-causality. And the 
non-causality claim is not, itself, conceptually true UNLESS you take the claim 
of non-identity to be tantamount to a non-causality claim (i.e., you think that 
to cause X something must already be X -- see below for more on this).
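
To put the logical point schematically (my own rendering, not Searle's wording, 
with S standing in for "syntax" and M for "semantics"): the reading that is 
conceptually true and the reading the conclusion needs are different claims, 
and the first does not entail the second.

   S \neq M                            % non-identity: conceptually true
   \neg(S \text{ causes } M)           % non-causality: what the conclusion needs
   (A \neq B) \not\Rightarrow \neg(A \text{ causes } B)   % the general inference this would require

The general inference fails on the very sort of example cited later in this 
post: molecules of H2O are not identical to water's wetness, yet they cause 
(give rise to) it.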

Does Searle realize he has done this? Probably not, as I don't think he would be 
deliberately misleading. Has he done this? I think the evidence is pretty 
obvious that he has.


>  >Of course the CR itself is described in terms which make this same
>  >mistake because, as Searle tells us, there is no understanding anywhere
>  >in the CR. And, we agree (or many of us do), there isn't.
>
> I trust that we can stipulate that there are syntactic operations going
> on in the CR.
>

I have already stipulated it for argument's sake. Of course it isn't always 
clear what Searle means by "syntax" but for the sake of this debate I have 
agreed to call the computational processes running on computers "syntax" or 
"syntactical" and have further agreed that by this we mean that this processing 
involves non-intentional, non-comprehending manipulation of symbols according 
to a set of rules that don't, themselves, consist of, or embody, the meanings 
of those symbols.

Of course, I've also noted that in at least one sense, knowing the meaning IS 
knowing how to use something, in this case how to use terms (as in knowing the 
meaning of a word), and that THAT knowing IS a matter of following the rules, 
and so forth. But for the purposes of this discussion I have agreed that our CR 
doesn't know anything like this in the way we humans do and only follows the 
rules mechanically, without any awareness of them, or that they are rules at 
all, or that it is following them -- or that it is doing anything else, for 
that matter. (Rather like MOST of the things brains do.)

But clearly, the need for all these caveats points up a lot of difficulties 
with the terms Searle is relying on. I don't think they are Searle's 
difficulties alone; rather, they just reflect the typical difficulties we hit 
whenever we get into discussing these kinds of things at all, e.g., 
understanding, intentionality, knowing, etc.


> the CRT only needs to provide the insight that there is no understanding
> in the CR.
>
> the rest follows.
>

That's the problem. It doesn't. The issue revolves around whether the lack of 
understanding in the CR is a function of the absence of something that is or 
has understanding, or whether it's a function of a system that is inadequately 
specced to have it because it doesn't do enough of what's needed to replicate 
what brains do.

THIS continues to be the crux of our difficulty. Some here, like you, Joe, are 
just unwilling or unable to entertain the possibility that what we call 
consciousness (including features like understanding) could be conceived as a 
system-level function rather than as something that happens at the level of 
one or more of the constituent elements within the CR. As long as you cannot 
fathom the possibility of a system-level explanation of consciousness, the CR 
looks compelling to you. But that appearance of being compelling hinges on a 
view of consciousness that won't allow for a system-level picture of it.

The problem, though, is that when you think about anything that occurs in the 
universe, there are no obvious simples in the old Russellian metaphysical 
sense. Everything appears to be a complex (or a function of a complex) of 
something else and thus, in a critical sense, everything is a system-level 
phenomenon. That is, there are no real basics or simples we can point to in 
our experience.

Whether there really are any simples at all, we don't know. But in the context 
of our ability to understand the world, whatever we look at has the appearance 
of being explainable in terms of other things. Even something that seems as 
basic as gravity (which Chalmers assures us is a basic principle of the 
universe) looks like it is explainable in other terms from an Einsteinian 
perspective, i.e., it can be explained as a function of the space-time 
continuum (as the outcome of infinite ripples in bent space -- though this is 
admittedly a hard concept to get down).

The only sorts of things that don't, thus far, appear to be reducible to 
something other than themselves are things like mind as dualists want to 
conceive it, the deity as theists want to conceive it, spirits as Leibniz 
might have seen them (in a monadic sense), etc. Now it is at least possible 
that the world might really be this way (have such things among its 
constituents), but at least for now science seems to be telling us otherwise, 
and science has been remarkably successful in learning about and manipulating 
the world -- far more so, in fact, than religion or metaphysical philosophy.

So the question is whether we look to a similarly scientific account of 
consciousness (in terms of the operations of the physical platform we call 
brains) or we hold out for something that demands a different picture of the 
universe than science now gives us.

Of course, I know that there are those, like you, I gather, who think that a 
scientific account of brains doesn't preclude a dualist account of minds 
(falling back on arcane metaphysical theories like certain versions of 
"property dualism", i.e., two uniquely different, irreducible-to-one-another 
properties belonging to one underlying thing, and so forth). But I am 
suggesting that such an account is, at least at this point, unsupported by 
anything science currently says about the world (though that doesn't preclude 
our discovering information in the future that might change this). Absent 
reason to go further than science now warrants, however, it is, on the view I 
have been presenting, a violation of Occam's Razor (and quite unnecessary) to 
hold out for a dualist conception of mind.

However, insofar as one cannot imagine mind in any other way, I guess one would 
feel compelled to keep trying. On the other hand, I would humbly suggest that 
the problem, finally, is traceable to a failure of imagination and not to a 
failure of a non-dualist account such as Dennett's.


> given that there are syntactic operations going on in the chinese room:
>
> [1] if you hypothesize that syntactic operations are identical to
> understanding, the absence of understanding in the CR refutes that
> hypothesis.
>

The CR is inadequately specced (see Dennett's point in that text we read on 
this list), so the absence of understanding in the CR is not the result of the 
failure of syntactic operations to understand (or to be understanding or to 
have such a property) but of their failure to be adequate to the task because 
of an insufficiency in their arrangement. Thus the correction of the problem 
lies in enhancing the system (by adding processes and functions and arranging 
them in a way that matches what brains do), not in finding and adding some 
missing constituent element which is or has understanding!

> [2] if you hypothesize that syntactic operations constitute
> understanding, the absence of understanding in the CR refutes that
> hypothesis.
>

"Constitutes" is still inadequately explicated here. Does it mean "is 
equivalent to" or does it mean "is the stuff of which understanding is made"? 
Both readings are variants of an identity claim, of course. Another possibility 
is Searle's own: that what is one thing at one level is encountered as 
something different at another and thus can be described as causal at its lower 
level of occurrence (as in molecules of H2O cause water's wetness, molecules of 
the table on which my computer currently sits cause that surface's hardness, 
etc.).

If the term is read as causal in this sense, it is perfectly possible to say 
that, just as an aggregate of molecules at the atomic level causes the 
phenomena or features we encounter at our level of operation, so an aggregate 
of certain kinds of information processing can cause (as in "constitute") what 
we mean by "understanding", "consciousness", etc. And in this case the failure 
of the CR to be conscious is not evidence that some R, consisting of the same 
kinds of constituent elements, could not be.

Thus the hypothesis you claim is refuted is not. But, to see that, you have to 
be able to see how consciousness could be explainable as a system-level 
phenomenon. As I have said above, given what we know of the world in scientific 
terms, it looks like everything, at some point, IS susceptible to a 
system-level explanation (even the individual processes in the CR itself), 
i.e., that 
there are no real simples in the universe other than those we insist on 
conceptualizing as such. But if there aren't, then merely thinking that there 
can or might be is not evidence that there are or that we must presume there 
are in any particular context.

In something like the CR, there are just layers upon layers of events, and the 
CR's problem is that it is insufficiently specced to reach the requisite level 
at which what we call "consciousness" is seen to occur.


> [3] if you hypothesize that syntactic operations cause understanding,
> the absence of understanding in the CR refutes that hypothesis.
>

Again, this is just a failure to see the possibility of a system-level account 
which is rather surprising given all the time I've spent here referencing and 
explaining it. I don't know what to ascribe your persistence in missing this 
to. A blind spot? A desire not to recognize this possibility? A powerful 
commitment to (even a hope of retaining) an explanation of mind that keeps it 
apart from any taint of the physical?

All I can say is that it never fails to surprise me just how committed folks 
sympathetic to a dualist account of mind are to that view!

It is not surprising that the CR, a rote responding device, lacks understanding 
because understanding involves more than rote responding! I will once again 
recall here Peter Brawley's example over on Analytic: Expecting a device like 
the CR to understand anything is like building a bicycle and expecting it to 
soar above the clouds like a jet plane!


> these conclusions are equally true. they rest on the same fact (syntax
> is present), the same insight (understanding is absent) and the same
> logic.


It's a failure in your logic, then, because the fact that consciousness may not 
be (and probably isn't) an irreducible is not taken into account. And, if it's 
not, then supposing that the absence of consciousness in the CR is evidence 
that the constituent elements of the CR cannot produce consciousness in any 
other R configuration (one that is more complex, more robust, etc.) is merely 
an exercise in conceptual obstinacy.


>all that changes is the nature of the hypothesized relation
> between syntax and semantics.
>

What you miss is the point of the system-level account. Apparently it's very 
difficult for some to come to grips with it.

> thus there is no support for your claim that the non-constitution claim
> is true but that the non-causation claim is not.
>

The non-identity claim is true (reading non-constitution as non-identity); the 
non-causal claim (reading non-constitution as non-causal) is clearly not. The 
fact that the terms allow for both readings is at the core of the equivocal 
nature of the third premise.


>  >But what Searle, via the CRA, is asking us to assent to is the claim
>  >that, because there is no understanding in the CR as he has given it to
>  >us (as he has specked it), there could be no understanding there (i.e.,
>  >if it were specked more robustly).
>
> as long as it is understood that up-specking the CR does not add
> anything that is not a syntactic operation; then, yes, that's exactly
> what the third axiom means: there is no understanding because syntactic
> operations do not constitute and do not cause understanding.
>

Your arguments above that "syntax" does not "cause" understanding simply 
collapse because of the inadequacy of the claims as you present them (you 
leave out the system-level possibility).

Note that just because no instance of syntax is an instance of semantics 
doesn't mean that some combination of syntax (syntactical operations) cannot 
produce semantics!
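
Schematically (again, my own formalization, not the vocabulary either of us has 
been using): a universal claim about individual syntactic operations does not 
settle an existential claim about combinations of them.

   \forall x\,(\mathrm{Syn}(x) \rightarrow \neg\,\mathrm{Sem}(x))   % no individual syntactic operation is an instance of semantics
   \not\Rightarrow\ \neg\exists y\,(\mathrm{Comb}(y) \wedge \mathrm{ProducesSem}(y))   % it does not follow that no combination of such operations produces semantics

Here Syn, Sem, Comb and ProducesSem are just labels introduced for the schema.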

The failure of the identity claim to sustain a causal claim is quite clear.


>  >To hold this is to think that the understanding, to be in a system
>  >like the CR, must be there as a part of one or more of its constituent
>  >elements, i.e., that it must be a property of one of its component
>  >processes (operations).
>
>  >This, of course, implies that understanding cannot be understood as a
>  >system level function but only at a basic process level, that is, that
>  >understanding (the proxy for consciousness in this case) cannot be
>  >reduced to anything more basic (and not already understanding) than
>  >itself.
>

> in my proof that the conclusion of the CRA follows from its axioms, you
> will not find any assumption that consciousness is or is not a process
> property;


That's because you have failed to provide sufficient semantics (meaning) for 
some of your terms, e.g., to differentiate between different uses of 
"constitutes" and therefore to recognize the possibilities of ambiguity in 
that and some other terms.


> nor will you find any assumption that consciousness is or is
> not a system property.
>
> Joe

That's because you completely miss the possibility. I don't know if it is a 
blind spot on your part or something else. Whatever it is, though, your "proof" 
only shows that certain logical relations obtain. Once we add the missing 
meanings to the terms, those relations no longer completely apply and so the 
proof doesn't succeed as such (at a semantic level).

If all you set out to do was to show that the form of the argument could 
demonstrate the truth of its claim under certain conditions, I would agree (if 
all your terms were adequately explicated). But, as Neil correctly noted (and I 
missed), there is a fundamental contradiction in arguing strictly syntactically 
for a semantic claim that syntax is inadequate to yield semantics.

Anyway, I think what you've written above fully reveals your underlying 
mistake, i.e., you really don't see, or won't see, that the force of the CRA 
depends on a metaphysical presumption about what mind is that has no 
justification besides the possibility that it could be true. That is, it 
relies on a dualist assumption about mind: that it is irreducible to anything 
more basic, ontologically, than itself -- that it is, in effect, a kind of 
simple in the universe (or derived from a different simple than the rest of 
the universe, which is physical).

As long as you remain wedded to this kind of thinking, the possibility that 
mind IS reducible just seems beyond the pale to you! And so the CRA continues 
to seem compelling. It once looked compelling to me, too, but I was then in the 
idealist camp (although, by the time I encountered Searle's CRA I was already a 
recovering idealist -- witness my unease with the argument even though I 
credited it with being right, initially).

SWM

