[Wittrs] Re: Original and derived intentionality

  • From: "jrstern" <jrstern@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 03 Nov 2009 23:39:54 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@...> wrote:
>
> > I don't see how to cross from intentionality to understanding,
> > I suspect they are different games. That is, I suspect I wish
> > to make them separate games.  Perhaps different levels of the
> > same game - I earlier said that everyone needs some kind of
> > theory of aboutness, in order to discuss mind, so make that
> > in order to discuss understanding.
>
> You could be right but, frankly, I don't see much difference
> between intentionality and understanding as I have seen
> the terms frequently used.

They are both used in a variety of ways.  The orthodox W's
here will just smirk and say "of course."  I suggest instead
we buckle down and fix the definitions within contexts.


> The understanding Searle seems to have in mind in his
> Chinese Room argument is not the ability to respond with
> apparent intelligence but to know what you're doing when
> you do it. What is that but understanding?

Yes, Searle crosses intentionality with understanding,
leaving both a mess.

> I know, of course, that sometimes by "understanding" we
> mean intelligence. But as Deep Blue amply demonstrated,
> a machine can appear to act intelligently but not really be so.

This is a very gray area.  The Turing Test says that if a
machine acts intelligently, that's the end of the story: the
question is dissolved, and we can't talk about whether it
"really" is intelligent or not.

I don't like that.


> Of course "intelligence" is also one of those terms with
> a range of meanings. A thermostat is a smarter machine than
> a thermometer after all, because it can do more, and a
> thermostat hooked up to more bells and whistles capable
> of responding to more complex circumstances with more complex
> operations would be more intelligent than a plain old
> thermostat. But none of that is what we have in mind
> when we think about human intelligence, as Searle rightly notes.

When *who* thinks about human intelligence?

I don't think this is anything Searle _rightly_ notes.


> So on that kind of analysis I conclude that the piece
> missing in the Chinese Room is, as others have said here,
> intentionality, that is, thinking about what is being
> asked and responded to.

Well, let me say this about that.

Part of the setup of the Chinese Room is that there is no
intentionality, no (stated) relationship between the algorithm
that computes the responses and the outside world.

Is this a coherent claim?

I suggest it is not.  We know (I forget who first published it)
that the "humongous lookup table" can "have a conversation" without
fancy rules or computation, just very simple indexing, and I suppose
it is even farther from "real" intelligence than the CR.  But can
such a thing be realized well enough to satisfy the Turing Test, and
thus Searle's parable, or is it an unrealizable thought experiment?
If unrealizable, I reject its significance.

Eliding a complex argument, I will assert that it is not realizable,
and neither is the CR, actually - UNLESS it *has* the very
intentionality it is assumed (NOT *CONCLUDED*) not to have.
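
A minimal sketch of the lookup-table idea, to make "very simple
indexing" concrete (Ned Block's lookup-table machine is the usual
citation; every name below is a hypothetical illustration, not
anyone's published construction):

    # Hypothetical sketch: conversation by pure table lookup.
    # All the apparent intelligence lives in the pre-built table;
    # at runtime there is only indexing, no computation to speak of.

    # Key: the entire conversation so far; value: the next reply.
    # A real table would need an entry for every possible history.
    TABLE = {
        (): "Hello.",
        ("Hello.", "How are you?"): "Fine, thanks.  And you?",
        # ... one entry per reachable conversation history ...
    }

    def reply(history):
        """Return the canned response for this exact history."""
        return TABLE.get(tuple(history), "I don't follow.")

The size of the table is what makes it unrealizable: with, say,
10^4 possible utterances and conversations 100 turns long, you need
on the order of (10^4)^100 = 10^400 entries, against roughly 10^80
atoms in the observable universe.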

> Thus understanding, in this sense, slides into intentionality,
> even while leaving behind mere cleverness in responses which is
> why I think this could be accounted for via a system of linked
> associations that includes a multi-layer complex of
> representational networks.

If one builds a system that exhibits "real" understanding, it will
certainly have the complex, multi-layer associations you refer to.
However, I do not think it is accurate to say it will work because
of the complexity; it will work because of the underlying
computational mechanisms, which are capable of realizing your system
and a large but finite number of similar systems, large classes of
which might be indistinguishable from one another without internal
inspection.


> Sometimes they proliferate on their own or at least
> independently of our preferences.

To the detriment of productive discussion.


> It's no good agreeing it's a bag of stuff
> unless we have a kind of inventory.

The Turing Test is a black-box kind of thing: we don't
know what's in the box, just behaviors.  Sometimes that's
acceptable.

> Again, I'm not getting it! How does one clast an icon?
> What do I need to do to follow you here?

"Iconoclast" is breaking (clast) of icons.


> So a stick isn't an agent but a computer and a person are?

Right.  Per my personal authority.


> What about a closed environmental system like a fish tank
> complete with its own filter, circulator, fish, plant life, etc.?

No.


> Is this kind of physical system agential?
> If not, what's the difference-making factor(s)?

The mappability of states to conditions.
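
To make that slogan concrete, a hypothetical illustration (mine,
not anyone's published definition): a thermostat's internal states
map cleanly onto conditions in the world it acts on.

    # Hypothetical illustration of "mappability of states to
    # conditions": each internal state corresponds, one for one,
    # to a condition the device responds to.
    CONDITION_OF_STATE = {
        "heater_on":  "room below setpoint",
        "heater_off": "room at or above setpoint",
    }
    # A stick, or a fish tank taken as a whole, has continuum-many
    # physical states with no privileged map to such conditions;
    # that is the difference-making factor.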


> > There is a particular car, and a particular dent that "belongs"
> > to the physical details of the car.  That's my deflation of
> > "subjective".
>
> That's too deflationary for me.

Clast one icon.


> > Making it hostage to deciding what it means
> > to be "first person" other than a physical system, pretty much
> > guarantees you're going around in circles.
>
> So what DOES it mean to be "first person"?

At the moment, my official position is don't know, don't care.

I'm asserting that, in the context of a particular instance of
an agent A, a particular string S "has intentionality".  I'm
sure the string is not an agent or a person, and I cannot *assume*
the agent is a "person", since that is the very issue in question.


> >  There has to be some state change, and I argue
> > that it has to be a clearly mappable state change
>
> "Mappable" to and for whom and to where?

You're familiar with the Searle paint-on-the-wall-is-running-Wordstar
claim?  I'm claiming the wall is NOT running Wordstar, because you
cannot give me the mapping of the paint molecules to how my PC runs
Wordstar - you can claim that you can do it in principle, but it is
not realizable in practice, and by the laws of constructivism, that
is not sufficient.

One could go on at great length about this, but I believe my
claim is sufficiently clear that I can assert it and move on.
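
To make "give me the mapping" concrete, here is a minimal sketch
of what a constructive implementation claim would have to deliver
(a hypothetical illustration; the names are mine, not Searle's or
anyone else's published formalism):

    # To claim a physical system implements a program, exhibit
    # (constructively, not just "in principle"):
    #   1. a map f from physical states to computational states;
    #   2. evidence that the physical dynamics track the program:
    #      f(evolve(p)) == step(f(p)) for every physical state p.

    def implements(physical_states, evolve, f, step):
        """Check that f carries the physical dynamics onto the
        program's state transitions, state by state."""
        return all(f(evolve(p)) == step(f(p)) for p in physical_states)

The wall "runs Wordstar" only if someone can actually hand over
such an f for the paint molecules.  Nobody can construct it, which
is exactly the point.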


> > Because I'm programming computers, even for mundane tasks,
> > and Wittgenstein was not.  The constructivism of computation
> > is something that Wittgenstein did not experience, and did
> > not address.  Turing never really did, either.  Again, I harp
> > on this endlessly, but was Wittgenstein not supposed to be an
> > engineer himself, early on?  Maybe it never really "took" with
> > him, I've seen the like in some people I know.  Turing had
> > his experiences in building Enigma machines, but those were
> > not the general purpose computers of his theorizing.  The
> ordinary experiences and practices of computation simply put
> > a different light on all of these discussions.
>
> I don't follow this part.

Neither did Wittgenstein, but I suppose he had
some excuses.


Josh

