[Wittrs] Re: Original and derived intentionality

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 04 Nov 2009 03:52:54 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "jrstern" <jrstern@...> wrote:
>
> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
> >
><snip>
> frankly, I don't see much difference
> > between intentionality and understanding as I have seen
> > the terms frequently used.
>
> They are both used in a variety of ways.  The orthodox W's
> here will just smirk and say of course.  I suggest instead we
> buckle down and fix the definitions within contexts.
>

I think Wittgenstein would have wanted to do that, too, though he might have 
had a different idea than you do as to what counts as fixing a definition 
within a context!


>
> > The understanding Searle seems to have in mind in his
> > Chinese Room argument is not the ability to respond with
> > apparent intelligence but to know what you're doing when
> > you do it. What is that but understanding?
>
> Yes, Searle crosses intentionality with understanding,
> leaving both a mess.
>

No, I think he rightly picks up on the fact that these ideas blend into one 
another. It's not entirely clear, in fact, that we're talking about genuinely 
distinct things; in some sense all these features bleed into each other. But 
that's probably a function of this particular area of reference. The referents 
aren't like the kind we mostly deal with in language. More often than not it's 
only philosophers, poets and, perhaps, religionists who have much interest in 
discretely referencing mental phenomena, and when they do, they have somewhat 
different things in mind. The poet and the religionist want to evoke, the 
religionist also to create a certain kind of picture within which to subsume 
our idea of the world. Philosophers, at times, seem not entirely sure what 
their project with regard to this kind of referent is.


> > I know, of course, that sometimes by "understanding" we
> > mean intelligence. But as Deep Blue amply demonstrated,
> > a machine can appear to act intelligently but not really be.
>
> This is a very gray area.


Yes, that's the point!


> The Turing Test says that a machine
> can act intelligently, and if it does, that's the end of the
> story, the question is dissolved, and we can't talk about
> whether it "really" is or not.
>

I think Searle rightly showed that, at least on one reading of what we mean by 
intelligence, the Turing Test doesn't meet the standard. But Searle's Chinese 
Room is not about the Turing Test per se; it's about what we think 
consciousness is.


> I don't like that.
>
>
> > Of course "intelligence" is also one of those terms with
> > a range of meanings. A thermostat is a smarter machine than
> > a thermometer after all, because it can do more, and a
> > thermostat hooked up to more bells and whistles capable
> > of responding to more complex circumstances with more complex
> > operations would be more intelligent than a plain old
> > thermostat. But none of that is what we have in mind
> > when we think about human intelligence, as Searle rightly notes.
>
> When *who* thinks about human intelligence?
>

The rhetorical "we." I wasn't including you, if you wish to be excluded!


> I don't think this is anything Searle _rightly_ notes.
>
>
> > So on that kind of analysis I conclude that the piece
> > missing in the Chinese Room is, as others have said here,
> > intentionality, that is, thinking about what is being
> > asked and responded to.
>
> Well, let me say this about that.
>


Are you channeling Vaughn Meader or JFK?


> Part of the setup of the Chinese Room is that there is no
> intentionality, no (stated) relationship between the algorithm
> that computes the responses, and the outside world.
>
> Is this a coherent claim?
>
> I suggest, it is not.


Yes, he under-specifies the room and then purports to draw a conclusion that's 
supposed to be applicable to more robustly specified rooms as well. You can't 
do that. It is one of his more egregious errors, I think.


>  We know (I forget who first published it)
> that the "humongous lookup table" can "have a conversation" without
> fancy rules or computation, just very simple indexing, and I suppose
> is even farther from "real" intelligence than the CR.  But, can such
> a thing be realized well enough to satisfy the Turing Test and thus
> Searle's parable, or is this an unrealizable gedanken experiment?
> If unrealizable, I reject its significance.
>

I think that's a relatively weak complaint about the Chinese Room argument. 
Even if technical infeasibility IS a factor, his argument may still have a 
point in principle.
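
Just to make the lookup-table idea concrete, here's a toy sketch in Python (my 
own, purely illustrative; the entries and names are made up). The point is that 
"having a conversation" reduces to indexing the whole conversation-so-far into 
a table of canned replies:

# Toy sketch of the "humongous lookup table": every possible
# conversation-so-far is a key, the canned reply is the value.
# A real table would have to be astronomically large, which is
# exactly the feasibility worry.
canned_replies = {
    ("Hello.",): "Hello, how are you?",
    ("Hello.", "Hello, how are you?", "Fine, thanks."): "Glad to hear it.",
}

def respond(conversation_so_far):
    # No rules, no computation to speak of: just look up the history.
    return canned_replies.get(tuple(conversation_so_far), "I don't follow.")

print(respond(["Hello."]))  # -> "Hello, how are you?"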

But there's a variant of the infeasibility argument that is significant: if it 
is just impossible to produce a computationally based system that could do 
enough to achieve consciousness, because we can't put enough of the system 
together in the real world, or because what we can put together can't match the 
brain's capacity, then one could conclude that computational consciousness is 
impossible, at least in practice (though perhaps some other technology could 
match brains in a way computationalism cannot).

This does seem to be the argument advanced against computationalism by both 
Edelman and Hawkins, though for diametrically opposite reasons. (Edelman says 
brains are too complex to be matched by anything computational, while Hawkins 
says computers are too complex to handle all the information efficiently enough 
to match what brains can do with their simpler and more elegant 
pattern-matching algorithm.)

Personally, I think the really important flaw of the Chinese Room argument is 
that it must assume what it wants to conclude, namely that consciousness cannot 
be reduced to processes that aren't themselves conscious (a dualist, and thus 
somewhat suspect, presumption that Searle himself has been at pains to 
disavow).



> Eliding a complex argument, I will assert it is not realizable,
> and neither is the CR, actually - UNLESS it *has* the very
> intentionality it is assumed (NOT *CONCLUDED*) that it does not have.
>


Yes, I agree that a more robustly specified CR would not be bound by the 
results obtained in Searle's under-specified version. But that is because you 
and I agree (I think!) that consciousness is reducible to constituent processes 
that are not, themselves, conscious. But if one doesn't embrace that view, if 
one insists, instead, that consciousness is some kind of ontological basic, 
then the CR looks compelling indeed. So it comes down to whether one ought to 
think of consciousness in this way or not. Searle himself is on record as 
declaring that consciousness has a "first person ontology" while, at the same 
time, insisting that it can be causally explained as the product of brains 
doing certain things. I think Searle fails to square these two positions, to 
the detriment of his argument.


> > Thus understanding, in this sense, slides into intentionality,
> > even while leaving behind mere cleverness in responses which is
> > why I think this could be accounted for via a system of linked
> > associations that includes a multi-layer complex of
> > representational networks.
>
> If one builds a system that exhibits "real" understanding it will
> certainly have the complex and multi-layer associations you refer
> to.  However, I do not think it is accurate to say it will work
> because of the complexity,


I didn't say "it will work because of the complexity" but that it will 
necessarily be complex in order to get a system that will do the job. Note that 
Hawkins thinks that intelligence, at least, can be achieved by a relatively 
simple algorithm. He could be right about that and, if so, about other features 
we associate with consciousness, too. Thus, contra Edelman, it is not the 
complexity but the particular type of functions being performed that matters. I 
just don't see how we could achieve it computationally without a very high 
degree of operating complexity.
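
For what it's worth, here's a toy sketch in Python of the "prediction from 
stored patterns" idea (my own illustration, not Hawkins' actual 
memory-prediction model), just to show the kind of simplicity he has in mind:

# Remember which item has tended to follow each pair of items,
# then predict the most common successor. Simple storage and
# matching, no elaborate computation.
from collections import Counter, defaultdict

class SequencePredictor:
    def __init__(self):
        self.successors = defaultdict(Counter)  # (a, b) -> counts of what came next

    def learn(self, sequence):
        for a, b, nxt in zip(sequence, sequence[1:], sequence[2:]):
            self.successors[(a, b)][nxt] += 1

    def predict(self, a, b):
        counts = self.successors.get((a, b))
        return counts.most_common(1)[0][0] if counts else None

p = SequencePredictor()
p.learn("abcabcabd")
print(p.predict("a", "b"))  # 'c' has most often followed 'a', 'b'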


> it will work because of the underlying
> computational mechanisms which are capable of realizing your system
> and a large but finite number of similar systems, large classes of
> which might be indistinguishable from each other without internal
> inspection.
>

I miss your point on this one.

<snip>


<snip>
>
>
> > Is this kind of physical system agential?
> > If not, what's the difference making factor(s)?
>
> The mappability of states to conditions.
>

I don't know what you actually have in mind by this feature. Can you give some 
examples?

>
> > > There is a particular car, and a particular dent that "belongs"
> > > to the physical details of the car.  That's my deflation of
> > > "subjective".
> >
> > That's too deflationary for me.
>
> Clast one icon.
>

Why?

>
> > > Making it hostage to deciding what it means
> > > to be "first person" other than a physical system, pretty much
> > > guarantees you're going around in circles.
> >
> > So what DOES it mean to be "first person"?
>
> At the moment, my official position is don't know, don't care.
>


But you used the term above, so you need to say what you mean by it. Perhaps we 
mean different things, or neither of us has a clear enough idea. I don't think 
it's enough to simply say "don't know, don't care" in this way.


> I'm asserting that, in the context of a particular instance of
> an agent A, a particular string S "has intentionality", and I'm
> sure the string is not an agent, or a person, and I cannot *assume*
> the agent is a "person" since that is the issue in question.
>

I don't follow.

>
> > >  There has to be some state change, and I argue
> > > that it has to be a clearly mappable state change
> >
> > "Mappable" to and for whom and to where?
>
> You're familiar with the Searle paint-on-the-wall-is-running-Wordstar
> claim?  I'm claiming it is NOT, because you cannot give me the mapping
> of the paint molecules to how my PC runs Wordstar - you can claim that
> you can do it in principle, but it is not realizable in practice, and
> by the laws of constructivism, that is not sufficient.
>

What kind of mapping are you thinking about here?


> One could go on at great length about this, but I believe my
> claim is sufficiently clear that I can assert it and move on.
>

Well, perhaps it's clear to you, but I am not clear on it. You can assert it 
and move on, but that won't help us get any closer to understanding one another 
in this matter.

SWM

<snip>
