[Wittrs] Re: The CRA in Symbolic Form (According to Joe)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 26 Apr 2010 16:15:51 -0000

Well, you've certainly gone to a lot of trouble, so I respect you for that. I 
will address only what I take to be the relevant parts, given how long this 
threatens to become. Note that the issue that seems to be the real problem 
here lies in some of your terminology. I hope my comments below will make 
this clearer:

--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:
>
> Formalization of CRA
>
> E is the existential quantifier
>
> logical relations:
> & conjunction
> v disjunction
> -> conditional (material implication)
>
> P = is a Program that is running
> U = is only using syntactic operations
> M = is a Mind
> S = is/has semantic understanding
> C = is constituting semantic understanding
> G = is causing semantic understanding
> K = is constituting minds
> Q = is causing minds
>


I accept your usages, though I question why you think it necessary to 
introduce different terms for "constituting semantic understanding", for 
instance, and "constituting minds", since theoretically the same 
constitutive relation obtains. Naming them differently suggests different 
relations that just happen to share elements of the same terminology. 
Needless to say, I believe this can be stated more clearly and simply by 
keeping our terms to a minimum consistent with ordinary language. However, 
for now I'll proceed with what you've laid out below.


> [1] (x)(Px -> Ux) {1}
>
> a running program is only using syntactic operations
>

This presumably corresponds to the first premise ("programs are syntactical").

> [2] (x)(Mx -> Sx) {2}
>
> whatever is/has a mind is/has semantic understanding.
>

And this corresponds to the second premise ("minds have semantics").


> [3] (x)(Ux -> (-Cx & -Gx)) {3}
>
> whatever is only using syntactic operations is neither constituting
> nor causing semantic understanding.
>

The third premise: "Syntax does not constitute and is not sufficient for 
semantics."

Here you have wrung out the prior ambiguity in Searle's terms. However, what 
you give us is stipulative, i.e., we need to know on what basis it is deemed 
true, other than an agreement to accept it for argument's sake.

Searle has told us it is a "conceptual truth", of course, and, as we have seen, 
the non-identity reading does appear to be conceptually true. But there is 
nothing conceptually true about the non-causal reading.

If we take "constitute" as a way of expressing a causal claim rather than 
identity, then it isn't conceptually true either. So the old problem 
remains, i.e., the truth of the CRA's conclusion(s) depends on the truth of 
the three premises, and we still have no basis for taking the third premise, 
read causally, as true, except insofar as we accept it stipulatively.

Recall there are two issues:

Is the argument valid?

and

Is the conclusion of the argument true?

You say it's valid because Searle doesn't present it equivocally (I 
disagree, for the reasons already given), but even if we wring out the 
equivocal usages of the third premise, as you do, we still have to deal with 
the validity question if the argument assumes its own conclusion, as I 
maintain it does. So let's go on.



> [4] (Ex)(Px & (Cx v Gx)) {4}
>

The fourth statement in the rendering of the CRA we have been working with is 
the conclusion:

Therefore computers can't cause minds (P's cannot cause M's).

So your [4] doesn't correspond with the CRA's #4 (and thus I'm not sure what 
your appended {4} is intended to signify).

My reading of your [4] is that you are adding steps here that you want to 
associate with #4 (i.e., a further sub-argument):

"There exists an x such that x is a P and either x 'is constituting semantic 
understanding' or x 'is causing semantic understanding'."


> this is a hypothesis, we take it as a temporary assumption in the hopes
> of deriving a contradiction so that we might negate it to produce the
> desired outcome.
>

> [5] Pa & (Ca v Ga) {4}
>

(I'll continue the English translation):

"a is a P and a either constitutes semantic understanding or a causes semantic 
understanding"

This tells us that whatever a is, it fits the description in your [4] 
above, i.e., a is the x said to exist in:

"(Ex)(Px & (Cx v Gx))"



> we let 'a' be the something assumed at [5] that is the program that
> either constitutes or causes semantic understanding.
>
> [6] Pa {4}
>
> this just unpacks [5]
>

But [5] isn't given as true, so it can support no subsequent claim of truth 
below it.

Note, as well, that the terms "constitutes" and "causes" have still not been 
adequately explicated in this argument. After all, on Searle's own view, what 
constitutes something can be described as its cause (see the wetness of water 
example). Thus far, your version of the argument leaves these relational 
descriptors badly underexplicated.


> [7] Ua {1, 4}
>
> since [1] applies to any x, it applies to a. combining Pa from [6]
> with Pa -> Ua from [1] yields Ua by modus ponens.


Agreed.


>
> [8] Ca v Ga {4}
>
> more unpacking of [5]
>


"a either constitutes semantics or a causes semantics"

But this doesn't tell us what these terms actually mean in this argument.

More important, what (Ca v Ga) actually asserts isn't settled by [5] as you 
suggest. Note that your #5 translates this way into English:

"a is a P and a either constitutes semantic understanding or a causes 
semantic understanding". ["Pa & (Ca v Ga)"]

This assertion isn't exhaustive because of the ambiguity left in the terms 
"constitutes semantic understanding" and "causes semantic understanding". 
Until we know what these terms are intended to denote, they are just 
placeholders with some connotative implications in English.

Since you aim to turn this into a completely formal argument, you have to 
define all your terms and not rely on connotations or even ordinary 
language any longer.

By telling us what the difference is between "constitutes semantics" and 
"causes semantics" you will be elaborating and clarifying why they are 
related by the (inclusive) disjunction "v" and whether it is appropriate 
that they be considered to stand to one another in that relation.


> [9] -(-Ca & -Ga) {4}
>
> it is not the case that both Ca and Ga are false. this follows from
> (Ca v Ga), the proposition at [8]. in fact [8] and [9] are logically
> equivalent. if either Ca or Ga is true; then, it can't possibly be the
> case that they are both false; and, vice versa.
>


Agreed. But the problem remains with either [8] or [9], i.e., the terms are 
underexplicated. It is certainly true that anything fitting the terms will 
stand in the relation(s) identified, from a logical standpoint, but if we 
don't know what is being so fitted, absent adequate explication of all the 
terms used, the exercise is empty. Since ambiguities in terms are key to the 
fallacy of equivocation, leaving the ambiguities in place does not resolve 
the problem. The reason is that you can structure an argument according to 
logical relations but, in the end, what the argument asserts (what its terms 
stand for) is what matters. This goes to the question of the adequacy of the 
CRA as a demonstration of the truth of its conclusion. (And I will reiterate 
that if you aim finally to demonstrate the truth of the said conclusion by 
incorporating a premise or premises that already assume the conclusion -- as 
we will see below -- then you will have merely moved from one fallacy, 
equivocation, to another, circularity.)
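
That much can even be checked mechanically. Here is a minimal sketch of the 
De Morgan equivalence between [8] and [9], again in Lean 4 notation of my 
own devising, with p and q standing in for Ca and Ga (note that the 
right-to-left direction requires classical reasoning):

  -- [8] (Ca v Ga) is logically equivalent to [9] -(-Ca & -Ga)
  example (p q : Prop) : (p ∨ q) ↔ ¬(¬p ∧ ¬q) :=
    ⟨fun h ⟨hnp, hnq⟩ => h.elim hnp hnq,  -- if p or q, both can't be false
     fun h => Classical.byContradiction fun hn =>
       h ⟨fun hp => hn (Or.inl hp), fun hq => hn (Or.inr hq)⟩⟩

But the equivalence holds whatever C and G happen to mean, which is exactly 
why it settles nothing here.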

But let's go on!


> [10] -Ua {3, 4}
>
> the proposition at [9] is the negation of the consequent of the
> proposition at [3]; hence, by modus tollens we derive the negation of
> the antecedent of [3].
>
> [11] (Ua & -Ua) {1, 3, 4}
>
> we have derived a contradiction by combining [7] and [10]. this rests on
> the union of the assumption sets that support [7] and [10].
>
> [12] -(Ex)(Px & (Cx v Gx)) {1, 3}
>
> since we have a logical contradiction we are required to deny one of
> those assumptions upon which the contradiction rests. I will deny the
> 'throw away' assumption taken as a hypothesis at step 4. this assumption
> is deleted from the assumption set on which [12] rests.
>

> thus, there is no program which constitutes or causes semantic
> understanding.
>


Here you are relying, again, on the inadequately explicated terms "constitutes 
semantics" and "causes semantics".

Here is your "step 4": "(Ex)(Px & (Cx v Gx))"

The issue comes down to what it means to say that x constitutes or causes 
semantics, which, as noted, you nowhere explain here.

Note that my point has never been to claim that for any x that is a syntactical 
operation, we can rightly claim it constitutes semantics where "constitutes" is 
understood as an identity relation.

However, I have also made the point that "constitutes" can be read as 
"causes" and that Searle even does that elsewhere. But "causation" isn't 
implied by "identity", nor is non-causation implied by non-identity 
(something you have previously agreed to), even if some ordinary language 
usages allow such a connection (see Searle).

You don't tell us how you are using the terms in question. Now note that if one 
considers consciousness (or understanding or semantics) as a function of a 
complex of what we are here calling P, then it is irrelevant whether any P, BY 
ITSELF, is sufficient for S ("is/has semantic understanding" -- see your 
definitions above).

Thus, again, the CRA is an argument about the wrong issue based on a 
pre-existing presumption that consciousness is irreducible.

But is the CRA invalid?

It is if it depends on a fallacy, of which equivocation is one. But 
equivocation isn't the only one, and if the CRA aims to prove that computers 
can't cause M because their constituents, P, aren't M, and does this by 
assuming that M must be identical with its constituents, then it is circular 
because it is already assuming its conclusion. And that's another, albeit a 
different, fallacy.
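
To be clear about where the disagreement lies: I don't deny that the 
skeleton of your [4] through [12] is formally in order once the premises 
are granted. Here is a minimal sketch, in Lean 4 notation of my own (not 
yours), that a proof checker would accept:

  example {Thing : Type} (P U C G : Thing → Prop)
      (h1 : ∀ x, P x → U x)              -- [1] programs only use syntax
      (h3 : ∀ x, U x → ¬C x ∧ ¬G x) :    -- [3] syntax neither constitutes nor causes
      ¬∃ x, P x ∧ (C x ∨ G x) :=         -- [12] the negation of hypothesis [4]
    fun ⟨a, hPa, hCGa⟩ =>                -- [5]: suppose a witness a
      have hUa : U a := h1 a hPa         -- [7]: modus ponens from [1] and [6]
      have h9 : ¬C a ∧ ¬G a := h3 a hUa  -- [3] applied to a, the source of [10]
      hCGa.elim h9.1 h9.2                -- [11]: either disjunct of [8] contradicts

So the form is not where we differ. What C and G mean, and whether [3], 
read causally, is true, is.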


> [13] (x)(-Px v -(Cx v Gx)) {1, 3}
>
> this is logically equivalent to [13].


According to your rendering it is [13], so presumably you meant [12] and 
have become confused by your lengthy reworking here. Not surprising, since 
this kind of game is conducive to such confusions, piling obfuscations onto 
ordinary language in the alleged quest to eliminate the fuzziness found in 
ordinary language. Wittgenstein was manifestly right to have moved away from 
it.


> if it is not the case that there
> is something that is a program AND which either constitutes or causes
> semantic understanding; then, for anything whatsoever, one of those
> propositions must be false. either x is not a program (-Px) or it is not
> the case that x either constitutes or causes semantic understanding.
>

As noted, this misses the point because of the failure to explicate the 
relational terms "constitutes semantic understanding" and "causes semantic 
understanding".


> [15] (x)(Px -> -(Cx v Gx)) {1, 3}
>
> since we know for a fact that there is at least one computer program
> somewhere in the world, it follows by the disjunctive syllogism (given a
> disjunction and the negation of one of the disjuncts, it follows that
> the other disjunct is the case) that -(Cx v Gx) is the case.
>

This continues to miss the point, i.e., that causing does not rely on 
identity (being constitutive of) at a basic level. That is, suppose we take 
an automobile, fill it with gas, connect all the wires and valves and 
pistons, turn the key and step on the gas. The various components (all of 
which involve physics) produce the motion of the vehicle as it rolls forward 
in response to the driver's actions. But the motion of the vehicle is not 
found in any of the individual physical elements. It is, of course, a 
system-level phenomenon. No one thinks that, because the gas or the pistons 
or the sparks are not the motion of the automobile, none of them has a 
causal role in that motion.

Similarly, if understanding is a lot of processes running together on a 
particular physical platform, then the failure of any element of that 
platform to have understanding is not a reason to think that the system 
cannot understand. Thus, your argument continues to be built around a denial 
that Sy (syntax) is Se (semantics), which no one disputes. The problem with 
it, and with the CRA from which you take it, is that it misses the point. It 
does this by ASSUMING something about understanding which its conclusion 
then claims to demonstrate, i.e., that causing understanding depends on 
being understanding.
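
To make the point concrete: from "no component is F" nothing follows about 
whether the system is F. A toy countermodel, sketched in the same Lean 4 
notation I've been using (the names are mine and purely illustrative):

  -- a toy world in which the part lacks the property ("moves") while the
  -- whole has it: -moves(piston) & moves(car) is satisfiable
  inductive Item where
    | piston
    | car

  def moves : Item → Prop
    | .piston => False
    | .car    => True

  example : ¬moves Item.piston ∧ moves Item.car :=
    ⟨fun h => h, True.intro⟩

As you've rendered it, the CRA contains no premise that rules this 
structure out for understanding, unless [3] is read as already assuming it.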


> [16] (Ex)(Kx v Qx) {16}
>
> this is another hypothesis or throw away assumption. we assume that
> there is at least one of something that is sufficient for causing or
> constituting a mind. let's call that something 'b'. note that we are not
> assuming either that b is or that b is not a program.
>

At this point I think I have seen enough of what's wrong with your rendering of 
the CRA, i.e., it depends on the same underlying assumption(s) and uses the 
same ambiguity in terms. I don't know if it's useful to proceed with your added 
terms for "constituting minds" ("K") and "causing minds" ("Q"). I think it's a 
mistake to produce new terms for the same relation when the point is to make an 
argument that carries the relations from one set of terms to another. But I'll 
read on a bit further, giving this aspect of your effort the benefit of the 
doubt.

> [17] (Kb v Qb) {16}
>
> unpacking [16].
>
> [18] Mb {16}
>
> if there is something that causes or constitutes a mind; then, there is
> a mind.
>
> [19] Sb {2, 16}
>
> finally, axiom [2] gets to do some work; and, by combining [2] and [18]
> (modus ponens) we conclude that there is semantic understanding.
>

Never in question, as long as we mean by this what humans do when they 
process information. But I know you think it important to show this on the 
grounds that Dennett has questioned the idea that there is a special 
something in the brain called "semantics". For my part, I will stipulate 
that we have understanding, whatever it consists of, and that it is what we 
mean by saying of anyone that they follow what is being said, etc.

> [20] (Cb v Gb) {2, 16}
>
> since b, whatever it is, is sufficient for constituting or causing a
> mind and a mind entails the existence of semantic understanding; then,
> it follows that b is sufficient for causing or constituting semantic
> understanding.
>

No dispute. What is in dispute, still, is what is meant by causing or 
constituting semantics. Certainly we wouldn't want to say brains constitute 
understanding, because some brains may still do what brains do but manifest 
no understanding at all. Searle says brains cause understanding, and I 
accept that usage for these discussions, i.e., they are the physical source 
and proximate cause of whatever events understanding consists of. What 
remains unexplicated is what that means, of course. By saying brains cause 
understanding are we asserting that:

1) they do something called "understanding";

2) they bring something into existence that was not formerly there and 
which counts as what we mean by "understanding"; or

3) they just are, in terms of some part or operation(s), what we mean by 
"understanding".

All of these usages have some problems, but my choice would be to settle on 
#1. In that case, though, "constitutes" does not have the identity sense but 
the causal sense, and, as such, it is not any particular piece of the brain 
that is the understanding but something the brain does. Insofar as that is 
so, the idea that understanding is just identical to some piece or single 
operation of the brain is seen to be irrelevant. So an argument pitched to 
showing that understanding isn't any particular instance of syntactical 
operation on a physical platform is similarly irrelevant. Causation does not 
imply constitutiveness qua identity.



> [21] (Kb v Qb) -> (Cb v Gb) {2}
>
> this restates [17] thru [20] as a conditional which allows us to drop
> the assumption on which [17] rests (known logically as conditional
> proof).
>

I think that, from here on, this goes far afield from what we have really 
been discussing, namely whether the third premise of the CRA supports the 
CRA's conclusion about computers running programs and whether it represents 
an equivocation, as I've claimed. After all, the existence of understanding 
and the role of brains in producing it, which you go on to address below, is 
undisputed.

So I'm going to leave what you've got, just for the record, but no longer see a 
point in reading along and commenting on it, step by step, as it doesn't really 
focus on the issues before us and it is very, very time consuming. If you think 
there is anything especially key to your argument in it that I should be 
attending to, just indicate that in your next post and say why and I'll give it 
more attention.

For now, though, I think the above is more than enough to make my point, 
and continuing along this lengthy formal rendering beyond the issues we were 
addressing is superfluous. (But I do appreciate the effort you obviously 
went through to put it together.)

SWM


> [22] Pb {2, 22}
>
> let us hypothesize that the 'b' mentioned in [21] is a running program.
> this is a new assumption added to the one on which [21] rests.
>

> [23] Pb -> -(Cb v Gb) {1, 2, 3}
>
> since [15] applies to anything, it applies to b, so we unpack [15]
>
> [24] -(Cb v Gb) {1, 2, 3, 22}
>

> combining [22] and [23] by modus ponens we conclude that b (assumed to
> be a program) does not cause or constitute a semantic understanding.
>
> [25] -(Kb v Qb) {1, 2, 3, 22}
>

> combining [21] and [24] by modus tollens, we conclude that b (assumed to
> be a program) does not cause or constitute a mind.
>
> [26] (-Kb & -Qb) {1, 2, 3, 22}
>

> if it is not the case that either Kb or Qb is true; then, it must be the
> case that neither Kb nor Qb is true.
>

> [27] Pb -> (-Kb & -Qb) {1, 2, 3}
>
> another step of conditional proof, restating [22] thru [26]. the
> assumption on which the antecedent rests disappears from the assumption
> set on which the conclusion of this step rests.
>

> [28] (x)(Px -> -(Kx v Qx)) {1, 2, 3}
>
> since we never said which program 'b' was it could have been any
> program; so, we are entitled to universalize [27]; and, thus we conclude
> that for anything whatsoever: if it is a program, then it does not
> constitute and it does not cause minds.
>

> Q.E.D.
>
> Joe
