[Wittrs] Causation, Identity, Constitutiveness and Sufficiency

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 26 Apr 2010 18:18:41 -0000

I shall interject, albeit briefly, here:

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> No "non-causation claim" exists in the third premise in the first place.
>

The CRA is about a non-causation claim, and the third premise is intended to 
support the CRA by providing a basis for its conclusion. The only way it can 
do that is by saying something about causation in the case at hand.

> A3: "Syntax by itself is neither constitutive of nor sufficient for 
> semantics."
>
> A3 means that no agent can *derive* or *ascertain* or *know* the meaning of a 
> given symbol X from knowledge only of the form "X" or from rules based on 
> that form.
>

You persistently confuse this idea of "agent", Gordon. As Josh once pointed out 
to me, an agent needn't be a subjective entity like ourselves. An agent can be 
anything that has a causal implication for something else (i.e., brings it 
about). However, when you talk here about the man in the room you repeatedly 
make this a matter of that entity, a clearly subjective agent, being unable to 
find the meaning in the symbols from syntactic rules concerning the symbols 
alone.

This is a confusion because the man in the room isn't a "man" at all in his CR 
capacity. He's a man playing a CPU. That still makes him an agent in Josh's 
broader sense but, in being so, his ability to guess meanings from syntactic 
rules is no longer the issue, because making it the issue implies that 
understanding requires the presence of a meaning-recognizing homunculus, which 
I'm sure Searle would not claim is necessary to have understanding in the CR.

So the fact that we subjective agents cannot recognize the meanings in symbols 
from information about their syntax alone isn't relevant. The question, rather, 
is whether such rote symbol manipulation meets the standard we apply to 
instances of our own mental behavior that we recognize as understanding.

And, of course, the answer is that it doesn't, because understanding things, 
for us, implies a whole complex of connections between the pictures or 
representations we carry of the world. If a computational system could do that, 
it is certainly arguable that it would have what we call understanding in 
ourselves.
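
To make that contrast concrete, here is a toy sketch of my own (nothing like 
this appears in Searle, and the symbols and rules are invented purely for 
illustration), written as a few lines of Python. The program pairs input forms 
with output forms; nothing in it refers to what any symbol means:

    # A toy "rulebook" in the Chinese Room spirit: it pairs input forms
    # with output forms. Nothing here represents meaning; the program
    # matches shapes only.
    RULEBOOK = {
        "你好吗?": "我很好。",   # a greeting and its reply, though the
        "几点了?": "三点钟。",   # "operator" has no idea that's what they are
    }

    def room(symbols: str) -> str:
        """Return whatever output form the rulebook pairs with the input."""
        return RULEBOOK.get(symbols, "对不起。")  # even the fallback is a rule

    print(room("你好吗?"))  # prints 我很好。 with no grasp of what that means

Whatever else one says about the CR, everyone agrees the room does no more than 
this; the dispute is over what a vastly more connected version of it could 
amount to.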

(I guess I wasn't as brief as I planned to be, huh?)


> Syntax does not equal, nor does it SUFFICE FOR semantics, i.e., we cannot use 
> syntax IN PLACE OF semantics.


That is true in the context we are speaking of here (though it is at least 
arguable that part of understanding a language is being able to use the syntax 
of its word symbols correctly, so in that other sense we could certainly say 
that knowing the syntax is understanding).


> And this is a general truth about syntax and semantics independent of any 
> considerations about the CRA or AI.
>

That syntax is not semantics (is not the same thing as semantics) is not in 
dispute. What is in dispute is what it takes to "cause" semantics as brains do. 
The system reply holds that it requires the right system and that such a system 
would have to be a more robustly specced "room" than the CR, though it is not 
at all inconceivable that it could be achieved using the same constituent 
elements as make up the CR.

> To put A3 yet another way: form does not equal substance, where substance 
> equals meaning.
>

To put my response another way: no one says it does, nor does the system reply 
depend on its doing so.

> And per A1, programs are formal.
>

No, per Searle they are formal, though we aren't disputing that for the 
purposes of this argument. There is, however, some dispute as to what it means 
to be "formal". After all, Searle does slide into the strange position that 
programs, being formal, lack the capacity to make anything happen in the real 
world. But if so, that's to take no account of the role of the physical 
platforms on which the programs run, namely computers.

Budd thinks Searle finally just means that computers, being mere hardware, are 
simply irrelevant to the programs they run (on the grounds of multiple 
realizability). But THAT is to confuse the idea of multiple realizability, as 
in any platform with adequate capacity to run the more robustly specced system 
will do, with the notion that certain non-computational (and entirely 
unspecified) machine features must be added to the mix, in which case it's no 
longer what Searle calls "Strong AI" that he is opposing.

But that's just silly because capacity always matters in any programmed system 
and because NO ONE in the field of AI thinks we're talking about programs in 
some idealized isolation from the physical platforms on which they run.


> When we say that programs are formal we mean that programs operate on and 
> according to the forms of data, not on or according to the meanings of data. 
> In other words they operate according to syntactic rules.
>

No one disputes that. What is disputed are two confused notions: first, that to 
demand a platform with a certain capacity is to depart from the claim of what 
Searle calls "Strong AI"; and second, that understanding is not conceivable as 
a system-level function rather than one that occurs at some basic level, 
co-existent with one or more of the basic constituent processes that the CR 
system runs on. If understanding is found at the system level, then the only 
reason the CR doesn't achieve it is that it is an inadequate system.
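
Since talk of a "system-level function" can sound mysterious, here is a loose 
toy analogy of my own (not anything Searle or anyone on this list has offered): 
three simple threshold gates, none of which computes exclusive-or on its own, 
yet the wired-up system does. The point is only that a capacity can belong to 
the organized system without belonging to any one constituent process:

    # Toy analogy: a capacity held by the system, not by any single unit.
    def unit(inputs, weights, threshold):
        """A bare threshold gate: fires (1) when the weighted sum clears it."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def xor(a, b):
        """Exclusive-or emerges from three gates, none of which is an XOR gate."""
        or_gate  = unit([a, b], [1, 1], 1)            # fires if either input fires
        and_gate = unit([a, b], [1, 1], 2)            # fires only if both fire
        return unit([or_gate, and_gate], [1, -1], 1)  # fires on OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))   # 0,0->0  0,1->1  1,0->1  1,1->0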

> Because of the nature of their architecture, S/H systems will never know what 
> they're talking about.


That is the mistake, i.e., the one that depends on a predetermined notion of 
understanding that ASSUMES it is irreducible, hence the dualist implication!


> By design, they operate on and according to the forms of symbols, not on or 
> according to the meanings of symbols, and form does not equal or suffice for 
> meaning.
>
> -gts
>


And the proponents of so-called "Strong AI" are saying that by design they can 
be built to operate according to meanings.

That form is not the same as meaning tells us nothing about the potential for a 
system to have understanding, if understanding is the ability to make the right 
kinds of connections.

I once asked you to explicate what you think understanding is if you think my 
account is mistaken. To date you have never answered that request. But perhaps, 
if you would try to do so and have a discussion about this, comparing the 
notion I have been offering with whatever counter notion you think pertinent, 
we could actually get beyond this yes-it-is/no-it-isn't level of engagement.

If you would even attempt to formulate what you think understanding is, we 
could look at it in light of the explanation I've already given and see whose 
account seems more sensible. If mine is more sensible, then there is no reason 
to insist on an account of understanding as a kind of irreducible. If yours is, 
then your insistence will look much stronger.

Why not take a flyer on this and offer an opinion of what it is YOU think 
understanding is (besides simply telling us what you are certain it isn't)?

SWM

