[Wittrs] Re: Original and derived intentionality

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 03 Nov 2009 03:28:38 -0000

Guess it's time to offer something on this now:

--- In Wittrs@xxxxxxxxxxxxxxx, "iro3isdx" <xznwrjnk-evca@...> wrote:

><snip>

> "Intentionality" is sometimes defined as aboutness, our ability to see
> our representations (descriptions, for example) as being about
> something.
>
> If I say "the cat is on the mat", that is about something.  Namely, it
> is about the cat and the mat.  Whether or not the cat or mat exist does
> not actually affect the aboutness or intentionality of that statement.
> The statement means something to me.
>

What does it mean to be "about" something? This is actually a somewhat 
idiosyncratic locution, typical of philosophers but not necessarily how we use 
the word "about" in ordinary language. Yet it seems there is a connection here 
and your use relies on it.

If you ask me what a story's about, I can give you a synopsis and that would 
usually answer your question. If you ask me what I'm talking about, I might 
point to something, that cat for instance. Nothing unusual here. But there 
doesn't seem to be a use for "aboutness" in ordinary discourse. Is my synoptic 
description of the story or my pointing at the cat an example of something 
called "aboutness"?

The concept of intentionality, understood in philosophical circles as this "aboutness" (the 
feature of being about something), has to do with the general inclination to say that, when we 
reference anything, we have something in mind. That is, if someone asks what we meant, we can 
explain our meaning with additional descriptive words in some cases, or by pointing at something 
to show it to our interlocutor, etc. Suppose a machine could do this with the facility we do. 
Would that be enough to say the machine has intentionality, too?

Something still seems to be missing, as we see below.


>
> If a computer processes the statement "the cat is on the mat", the
> computer only sees that as a string of bits or characters. Thus, to the
> computer, the statement exists only as syntax and as rules programmed
> into the computer.
>
>

This raises the question of what mechanism underlies our being able to 
reference something intentionally, to have it in mind when we mention it, and 
whether what we have is fundamentally different from what the computer has. I 
think, nearby, Josh raises the question of why we should presume so. I think he 
has a point.


> If the computer handles a statement such as "the cat is on the mat" in a
> way that seems appropriate to us, we can describe the computer action as
> if there were intentionality.


"Seems appropriate to us" in a simulated way or in a way that covers all the 
bases that we cover? (What bases do we cover when we think about anything?) 
Could a computer ever conceivably cover all the same bases?

Well, it depends on what it means for us to be intentional in this way, no? Is 
being intentional in this way to have some special state, some special quality 
that machines simply cannot have?


>  Dennett spoke of this as us taking the
> intentional stance.  In taking such an intentional stance, we ascribe
> intentionality to the computer.  This is said to be derived
> intentionality,


I don't recall whether Dennett calls it "derived intentionality". Isn't that more a Searle 
thing? I think Dennett's point is that ascribing intentionality is something we, as observers, 
do, a kind of fictional construct which we apply to various operations, and that mindless 
machine operations can sometimes be treated as intentional no less appropriately than mindful 
human beings can. It depends, on Dennett's view, on what it means to be a mind, to have one. If 
minds are fundamentally machine-like, then being intentional, as we understand it, is not 
qualitatively different for machines than for us. It's just a matter of degree.


> since the aboutness of the statement as seen by the
> computer is derived from what we humans see as what the statement is
> about.
>

Yes, that is Searle. The machine is just a tool and all meaning, all semantics 
is read into it by external, intelligent users.

>
> By contrast, we humans are said to have original intentionality, in that
> it comes from within and is not merely a matter of attribution by
> others.
>

What is it that differentiates this "original intentionality" from the 
"derived" kind? The so-called feature, whatever it consists of, comes from 
within the organic machines we are and is not read into us by outside users? 
Well, that makes sense. But wait, what is it in us that isn't read into us but 
which we, as outside users of machines, might sometimes read into them?

The real issue here is not that we think, as conscious intelligent beings, we 
have intentionality (so-called "aboutness") but, rather, what THAT amounts to. 
What it consists of. Well, what would that be?


>
> The issue is controversial.  Searle's "Chinese Room" argument was
> intended to demonstrate that something is missing in the
> AI/computationalist model, and that an AI system could not provide
> original intentionality.  There is fairly widespread agreement that
> computationalism fails to explain original intentionality.


Is there? But we don't yet know what this intentionality (of the "original" 
type) actually is. We just know we are aware of what we mean lots of the time, 
even sometimes when we are operating by rote, i.e., we are likely to be at 
least marginally aware. The machine is never aware in any subjective way, though 
it will pick up and react to inputs. Can we say it is aware of its inputs? Is 
that a kind of awareness, too? I think Josh might say it is. I'm pretty sure 
that Neil would not.


> The
> Churchlands argue that intentionality is like phlogiston - a theoretical
> term in an antique theory that we should just eliminate from our
> discussions.

Couldn't that be an adequate explanation? Indeed, "intentionality" is a rather 
late linguistic innovation that comes to us via philosophy. Should that make it 
suspect, given that we know philosophy so often wanders off the ordinary 
language reservation?


> Dennett seems to take the position that derived
> intentionality is sufficient, and presumably thinks that original
> intentionality isn't actually original.


I don't think he makes that distinction, which I think is best ascribed to Searle. But he does 
suggest that intentionality is something observers impute to others and is not to be found 
anywhere in the organism itself.


> Searle clearly believes that
> intentionality is something missing in computationalist theories of
> mind.

He does. That is the key to his whole approach and why he thinks his Chinese 
Room device doesn't understand Chinese no matter how effective it is in 
translating and responding to Chinese questions. Real understanding of the 
sounds, such as we have when we know a language, is not to be found in the 
Chinese Room. So the question is: what does "real understanding" consist of? 
What do human language speakers have that eludes the Chinese Room and 
computational devices specced like it?


> It's my impression that Fodor tends to agree with Searle on that.
>

I don't know enough about Fodor to comment.

>
> That's enough of an introduction, I hope.  So now a thought experiment.
>

>
> The Traffic light controller
>
> Think of a traffic light controller at a busy intersection.  There are
> sensors under the road that send signals to the computer so that it can
> adapt to the traffic conditions.  There are wires connected from the
> traffic light controller to the traffic lights themselves.
>


> To the computer, the signals received are just signals received. They
> have no particular meaning to the computer in the traffic light
> controller.  Likewise, the energy sent by the controller to the traffic
> lights has no particular meaning to the computer in the controller.  The
> controller just follows its programmed instructions that read the
> incoming signals and operate switches to set the outgoing electrical
> energy.
>

Yes, the machine in this case is just a dumb mechanical device, even if it is capable of much 
more complex operations than some simpler devices. But dumb as it is, on the one hand we just 
might want to say that such a machine is "aware" of its inputs (as I've already proposed above), 
but then that can't be what we mean when we speak of entities like ourselves being aware. What 
does it mean for us to be aware, to know that when the green light flashes for us it means go, 
and to know that "go" means to start driving again, etc.? If we were robotic devices we could be 
programmed to move forward on the green light and to stop on the red. But would we KNOW that 
that was what we were doing when we did it? We would just do it. We wouldn't think about it, 
wouldn't deliberate before doing it, wouldn't reference some previously learned rules about it 
or think about it while doing it. We would just start and stop.
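
Just to make the contrast vivid, here is a minimal sketch, in Python, of the kind of 
rule-following the traffic light controller or the imagined robotic driver performs. The signal 
and action names are invented for illustration; nothing in the lookup is "about" traffic in any 
sense beyond what we, as observers, read into it.

# A toy rule-follower: it maps inputs to outputs exactly as programmed,
# with no awareness of what "green", "red", or "go" mean.
# The signal and action names here are invented for illustration only.

RULES = {
    "green": "accelerate",
    "amber": "brake",
    "red": "brake",
}

def react(light_color: str) -> str:
    """Return the programmed action for a detected light color."""
    # The device neither knows nor deliberates; it just looks up the rule.
    return RULES.get(light_color, "hold")

if __name__ == "__main__":
    for signal in ["red", "red", "green", "amber"]:
        print(signal, "->", react(signal))

Whatever "aboutness" the mapping has is, on Searle's telling, entirely derived: we read it into 
the lookup table; the device just switches.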

Well, isn't that more awareness of the green and red lights than a machine without such 
programming has? So it is less awareness than we, as humans, have, but more than the 
unprogrammed device that is otherwise like it. Could we just say, then, that being aware is a 
matter of degree? If so, must we still presume there is a qualitative difference between our 
awareness and that of the programmed device? If not, then maybe the difference is just 
quantitative, a matter of degree.


>
> We could describe the actions of the controller, as if it understood
> what it was doing.  That would be a matter of us taking the intentional
> stance and would allow us to talk of a derived intentionality.
>
>

The issue for AI, though, is whether a device can be built that has the kind of intentionality 
we have. And we have not yet figured out what that might be.

> For the engineer, the situation is very different.  The signals from the
> under-the-road sensors are about something, because the engineer
> designed the system so that they would be about the traffic flow. For
> sure, the engineer used off-the-shelf sensors.  But he used them in ways
> that guaranteed that they would produce signals that represented what he
> wanted those signals to represent.


No one would deny that, on the philosophical meaning of "intentional", the engineer of the 
device is intentional. The question is what constitutes intentionality in a complex system like 
the engineer. What is going on within the engineer's mental life that counts as being 
intentional?

>  Likewise, to the engineer, the
> outgoing energy from the controller to the lights is about something
> (the operating of the lights), because the engineer designed the system
> so that those signals would be about the operating of the lights.
> Moreover, the engineer periodically comes back to check on the traffic
> light system, to make sure that the incoming and outgoing signals are
> still doing their intended job properly, so are still about the traffic
> and the lights.
>

>
> Human interaction with the world
>

>
> The disagreement I have been having with Stuart in the "Vexing Question"
> thread has been about our relation to the world. Stuart sees us as
> receiving inputs from the world, and computing with them.


I don't know that it has to be "computing". At some level I would say, sure, computational 
activity underlies what our brains do, but since reading Hawkins I am inclined to think that the 
computational activity (the mechanical kind of on-and-off switching in patterns) may lie very 
deep and not itself be the functioning part of the consciousness operations. That is, perhaps 
the relevant activity is something more like the analogical patterning Hawkins ascribes to the 
cortex.


> That's a
> common view, held by AI people and by many philosophers.  But that puts
> people in the same role as the traffic light controller.  The people
> receive meaningless signals, and act on them in a preprogrammed manner.


Now this is the crux, I think. When I accuse Searle and some others of being implicitly dualist, 
it comes down to this: whatever the conscious goings-on in our brains (the parameters of our 
mental lives), unless we grant that these are the product of underlying processes which aren't, 
themselves, conscious (intentional, aware, understanding, etc.), we are stuck in a dualist 
picture. That is, we are supposing that mind is not reducible to anything more basic than 
itself.

But, if that's so, then where does whatever it is we call "mind" come from? If 
brains do it, then at some point the features of mind must come into being in 
the brain as a result of what the brain is doing. And that means the features 
of mind are the result of something that isn't itself already mind. Otherwise 
we're stuck supposing that the features of mind either were always there or 
else that they somehow blinked into existence from some other domain.


> Just as we see the traffic light controller as having only derived
> intentionality, likewise it becomes a mystery as to why the humans
> should have anything more than derived intentionality.  And thus
> original intentionality is seen to be mysterious.
>


That is why it's so critical to say what it is we mean by this 
"intentionality". What does it look like in us? What is going on in our minds 
when we are being intentional (thinking about anything)?


>
> My own view, and the one I have been arguing on that older thread, puts
> people in a rather different position with respect to their relation to
> the world.  I see us as more in the role of the engineers.


Are we then our own "engineers"? But then who is the engineer of the engineer, and so on? Here 
we tip into Cayuse's homunculus problem, don't we?


>  As I see it,
> we don't just receive inputs. We engineer those inputs ourselves to suit
> our own needs.


Consciously or pre-consciously or in some more mechanistic fashion?


>  We may use off-the-shelf (or off-the-DNA) sensors, but
> we control how those sensors are used and thus we control what our
> inputs are about.


How does this become that critical thing you have called "intentionality"?


> Moreover, we monitor our inputs to make sure that they
> are about what we want them to be about,


But we still have no account of what it means to be about something in this sense! Is X being 
about Y a matter of a referential relationship? Can such a relationship exist without being 
grasped by our engineer friend? But what is it to grasp (as in "understand") anything? The 
machine plays X whenever its sensors detect Y. X is thus about Y. But the machine lacks 
awareness of either X or Y except insofar as it reacts to them as it has been programmed to. Nor 
does it have a sense (picture?) of what it means to play X whenever Y is detected. What is 
missing here, in this kind of "intentionality", that is presumably not missing in our kind?


> and we re-engineer their use,
> as needed, to make sure that they continue to provide input that is
> about what we want it to be about.  And that's why we have original
> intentionality.
>
>
> Regards,
> Neil
>

But we still have no account of what it means to be "about" something. You have 
given us an explanation of "why" we have something (original intentionality) 
without telling us what it is to be intentional in this sense in the first 
place! So your explanation cannot explain what it is to have something that 
remains, itself, mysterious because unexplained.

Just as previously you gave no real account of how "pragmatism" becomes a 
mechanism for being conscious (rather than just being a factor in its 
development and appearance in the world), here you give no real account of what 
it means to be "intentional" when you say "we re-engineer their use, as needed".

What does this last statement mean, how does it provide a mechanism for aboutness, and what is 
aboutness anyway?

SWM


