[Wittrs] Re: Meaning, Intent and Reference (Parsing Fodor?)

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 12 Feb 2010 01:53:47 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "jrstern" <jrstern@...> wrote:

> --- In Wittrs@xxxxxxxxxxxxxxx, "SWM" <SWMirsky@> wrote:
> >
> > Philosophy isn't quantum physics or even physics. Its subject matter is 
> > ideas and getting clear on them; having to resort to complex and arcane 
> > formulations in most cases is diametrically opposed to the idea of making 
> > things clearer, I think.
>
> If the world is complex, how could philosophy of the world be simple?
>
> And, the world *is* complex.
>


I meant straightforward as opposed to merely simple, but I agree that it isn't 
always possible to state ideas in the simplest of terms. Some things we want to 
say about the world, some ideas, are indeed complex and require complex 
formulations. I am not arguing against complexity in explication per se, but I 
think there is a way to go about it that makes things clearer and another way 
that doesn't.

Edelman and Hawkins are good examples here (though they're not writing 
philosophy). I think both make good points and have important insights to 
convey. Both are also, roughly speaking, on the same side. They are anti-AI when 
it comes to how brains work (both cite Searle's CRA approvingly) and have 
similar points to make, e.g., that brains don't work like computers and 
therefore one cannot expect computers to successfully replicate the outcome of 
brains that we're interested in in the present context: a conscious mind.

For various reasons (stated elsewhere) I think both are wrong, but Hawkins is 
clearer and makes a stronger case, which, it seems to me, is a function of 
that greater clarity (helped along, presumably, by his enlistment of a 
co-author for his book).

In the end, one is left with a hodge-podge of verbiage from Edelman who, for 
all his good ideas, confuses certain things he presents and fails to offer a 
clearly described thesis. At the end of his two books about how brains make 
minds (the two I read), one is still left with that question unanswered, for 
all the good ideas he throws out.

It's complexity, he says, manifested in brain morphology and driven by 
biological complexity at the genomic level, which is far more complex than the 
binary calculus computers rely on. Thus, he asserts, computers can't ever be 
complex enough (think of the difference between a jungle and a power plant, he 
suggests). He tells us brains operate as they do to produce consciousness based 
on this complexity, achieving an ill-defined re-entrant process (I don't recall 
if that's the exact term he used), and that this is the difference-making 
factor between what brains can do vs. what computers can do, though he doesn't 
tell us how it actually makes the difference!

Hawkins, on the other hand, gives us a very precise explanation of how the 
neocortex seems to work (a simple algorithm that causes its constituent neurons 
to blink on or off depending on the input) and how that working could serve to 
capture, preserve and respond to ongoing inputs received through the 
neurological system from our sensory apparatuses. He is precise where Edelman 
is vague.
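
To make that concrete, here is a toy sketch (my own gloss, not Hawkins' actual 
cortical algorithm; the weights and threshold are invented for illustration) of 
a unit that "blinks on or off depending on the input":

    # Toy illustration only: a unit turns "on" when the weighted sum of its
    # inputs crosses a threshold, and stays "off" otherwise.
    def unit_fires(inputs, weights, threshold=1.0):
        return sum(i * w for i, w in zip(inputs, weights)) >= threshold

    print(unit_fires([1, 0, 1], [0.6, 0.9, 0.5]))  # True: 0.6 + 0.5 = 1.1 >= 1.0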

Further, both Edelman and Hawkins focus on the way we remember things, with 
Edelman pointing out, correctly I think, that memory in humans is nothing like 
memory in computers. A computer's memory must be precise all the time, never 
varying, or errors gum up the works, whereas human memory is dynamic and 
imprecise, a phenomenon of reconstruction every time, so that each memory is 
new even as it relates to what has gone before. Hawkins makes a similar point 
but goes on to explain how such a memory function could work, namely that 
memory involves input recapitulation from the bottom up and then from the top 
down (where the "top" is understood as the retained global picture). What is 
called up each time in an instance of memory is basically a kind of generic 
template, built up from past specific inputs at a progressively more detailed 
level, which then matches the newly incoming inputs at a progressively 
descending level of detail and adjusts the ongoing retained template based on 
any changes being received.
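
To put the template idea in concrete terms, here is a rough Python sketch of 
the adjustment step (my own illustration, not Hawkins' implementation; the 
blending rate and feature values are invented):

    # The retained "generic template" is nudged toward each newly incoming
    # input, so remembering is a fresh reconstruction, not a fixed replay.
    def update_template(template, new_input, rate=0.2):
        return [t + rate * (x - t) for t, x in zip(template, new_input)]

    template = [0.5, 0.5, 0.5]                 # built up from past inputs
    template = update_template(template, [1.0, 0.4, 0.0])
    print(template)                            # [0.6, 0.48, 0.4] -- adjusted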

In both cases we have a complex picture of the phenomenon of human memory, but 
Edelman doesn't provide specifics or work out the kinks. He contents himself 
with a useful insight, while Hawkins connects his insight to the overall 
picture he is seeking to develop for us of how intelligence works (i.e., he 
goes from this description of how memory might operate in the neocortex to a 
description of how this turns into what we recognize as intelligence).

Hawkins' presentation isn't so much "simple" as precise and straightforward, 
however complex in its details. There's no beating around the bush, in this 
case, no dithering in generalities as one gets with Edelman.

That is what I have in mind when I suggest that philosophy is best done in 
ordinary language. If we can think X then we should say what we're thinking in 
understandable terms rather than relying on vagueness, generality or neologisms.

Whether Fodor is doing that is something I haven't determined to my own 
satisfaction yet. I only note that being obscure and hard to get are not 
generally indicators of the presence of complete or fully satisfying ideas.     


> > > > But clearly just manipulating zeroes and ones in a computer via an 
> > > > algorithm isn't understanding.
> > >
> > > That's not clear to me.
> > >
> > > > With Dennett I would argue that what's needed is a sufficiently complex 
> > > > process-based system operating in a certain way (the way this is 
> > > > physically realized).
> > >
> > > Ones and zeroes can be complex.
> >
> > Yes and it is in the complex deployment of these (certain kinds of 
> > process-based systems) that one can envision achieving the subjectiveness 
> > we associate with having a mind and which we call "consciousness". But my 
> > point is that, by themselves, they are not instances of consciousness or 
> > any of the features we associate with consciousness (in this case the 
> > feature in question being understanding as in grasping meanings).
>

> I just can't grant that point.
>
> It's like saying that little dots of color (aka pixels) do not comprise a 
> picture.
>

They don't unless configured in a certain way.

But then again this may just depend on what we mean by "comprise". If all we 
mean is that they are the constituents of the picture in question, then I would 
agree that they do "comprise" it. But if what is meant by "comprise" is to 
constitute what we mean by "a picture", then I would say no, the picture is 
more than the little dots, the pixels, the pigments, and so forth. It's their 
arrangement in a particular way as well as the individual dots themselves.
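
A toy illustration of that point (mine, just to pin it down): the very same 
dots, differently arranged, are different pictures:

    # Identical constituents, different arrangements, different pictures.
    dots = ["#", "#", ".", "."]
    picture_a = [dots[0] + dots[2], dots[1] + dots[3]]   # rows "#." and "#."
    picture_b = [dots[0] + dots[1], dots[2] + dots[3]]   # rows "##" and ".."
    print(sorted(picture_a[0] + picture_a[1]) ==
          sorted(picture_b[0] + picture_b[1]))           # True: same dots
    print(picture_a == picture_b)                        # False: different pictures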

A molecule of water isn't wet but water (an aggregate of such molecules 
encountered in a certain way under certain conditions) would be. The molecules 
and the pixels are one level of encounter, the picture and the wetness a 
different level even if each phenomenon is describable at both levels.


> Or little scribbles of ink, do not comprise letters and words.
>
> Well, maybe that's true, but if it is true, does it matter?
>


Yes. And no. Depends again on what the question is and how we are using our 
terms.


> The objection is that it takes something outside, to make the dots a picture, 
> or the scribbles into words.


No, it's that it takes something else, even if it isn't of the same qualitative 
type. The pixels and the picture are not conceptually the same and play 
different roles, even if physically they are the same in the case of a given 
picture. Is an "arrangement" "something outside"?


>  And this is true - that is, something like this, is true.  However, that 
> thing outside would have no picture, no words, without the dots and 
> scribbles.  The dots and scribbles - are not NOTHING.  The perception by that 
> outside agency does not change the dots or scribbles, they are what they are, 
> perceived or not, interpreted or not.  What then?
>


I agree that the dots and scribbles are not nothing. But they are not the whole 
story, either. This gets us to this whole question of reduction I suppose.


> This is all utterly common, and yet also hard to nail down theoretically.
>

I don't really think it's that hard to nail down, at least not conceptually. I 
don't know about theories and such in a case like this, though obviously 
theories would be at issue in certain kinds of inquiries (the science of brains 
and minds, for instance).


> In the absence of a clear position on these matters, I don't think anyone can 
> put together a coherent paragraph on philosophy of mind.
>


I don't agree. Dennett's done a pretty nice (if sometimes long-winded and 
overly polemical) job. So have others. Fodor is a different story -- at least 
so far for me.

I do think, though, that your point flags another issue we have often alluded 
to here, namely that the Wittgensteinian idea about language and how it works 
renders it a public phenomenon, hinging on a community of speakers following 
shared rules. When we get to the problem of referring to mental "things", 
language does seem to get rarefied to the point of breaking down, and this, as 
I've suggested in the past, seems to be a function of the non-public nature of 
so many of the mental referents.


> And the only positions I know of are (a) the agent is privileged, has 
> original intentionality, and we have NO idea how that works, or (b) the 
> "aboutness" that makes dots into pictures etc is attributional and "not 
> real", or (c) behavioral or Wittgensteinian approaches that don't want to 
> know about such mechanics, they just note that they occur and then taxonomize 
> them.  Wittgenstein (and I think most others) reject the idea of photographs 
> being any kind of foundation.

I don't follow this point.


>  I think this is incoherent, like Searle's strawman of "syntax not being 
> enough for intelligence".  Well of COURSE syntax alone is not intelligence, 
> but syntax does not even occur alone, and hey, perhaps does not occur at all, 
> in some readings, but if it occurs, it occurs in a physical manner and causal 
> chain of events - and CANNOT be separated out.  This is "the systems reply" 
> writ large.
>

Here we are in agreement, though I would have (and have) expressed it 
differently.


>
> > My question was after something a little different. I was hoping you could 
> > provide a summary statement, in ordinary language, that tells us what Fodor 
> > means by his language of thought idea, i.e., what it is he thinks is there 
> > to discover?
>
> Again, I can't imagine what answer would satisfy you.
>

Maybe not. All I was really after was a precise and clear-cut restatement of 
Fodor's thesis concerning mental "things" such as thoughts being dependent on a 
"language of thought", i.e., what it is, where it is, what it looks like to us 
(if we could see it), etc.


> You seem to think the complex can be made simple.  If anything, that is 
> exactly what Wittgenstein tries to avoid, in his talk of grammars. 
> (unfortunately, taking the grammatical route, means that you also then 
> methodologically reject even the complex made complex but rigorously, 
> systematically, you don't want to know)
>


I believe we should always aim for clarity, and the first way to get that is to 
pare away all the confusions, misdirection, inapplicable associations, etc., 
that overlay our many linguistic uses in the various fields of inquiry where we 
build up lots of technical jargon. But that isn't the only way and probably not 
sufficient in all cases (see my point about the difference between Edelman and 
Hawkins).


> "Language is like a tweeting bird!", or something like that, I think Captain 
> Kirk said, was it to confound Norman on Mudd's Planet?
>

I don't recall my early Star Trek. (I actually never much liked it -- The Next 
Generation was much better.) I fear it is too often too easy to take refuge in 
complexity whether it's needed or not rather than to see if something can be 
said clearly and, dare I say it, more simply.


> Clearly, Fodor means something very like a computer language, represented as 
> a computer language is, ultimately, in either or both of symbolic marks on 
> paper, or electrons in circuits, or other realizations.

This is what I was hoping for: A "computer language" of brains then? Of course 
the language of programming isn't the language of the computer, for it must 
first become machine language before computers can actually implement it, 
right? So is Fodor's language of thought the machine language while English is 
like, say, COBOL?
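
To show what the analogy trades on (using Python rather than COBOL, purely for 
convenience): the line a programmer writes is not what the machine executes; it 
is first translated into lower-level instructions, which can be inspected:

    import dis

    # Display the lower-level instructions behind one high-level statement.
    dis.dis(compile("x = 1 + 2", "<example>", "exec"))
    # Prints bytecode (LOAD_CONST, STORE_NAME, ...) -- the machine-level
    # counterpart of the source line, analogous to machine language vs. COBOL.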


>  Fodor makes no mention of neural circuits, if that's what you're asking.


No, or at least not necessarily. But if he is saying that brains function like 
computers then presumably they would have to be mentioned somewhere along the 
line.


>  Due to the whole "multiple realizability" aspect of computation, the exact 
> physical forms should not be important.


Agreed.


>  That *some* physical form is eventually found to correspond, is important, 
> but that's not Fodor's department.  But, just to cause us all pain, Fodor 
> insists that this computer language works only because and when it 
> corresponds in some dual-aspect manner also to innate and preexisting 
> concepts, that represent (eg, mirror) the world.
>


This gives me some further trouble (as you suspected it would). But it does 
sound like he's saying something like what Sean is getting at with his "brain 
scripts".


> Now, clearly, if you HAVE something that represents and mirrors the world, 
> that would be handy.


How does he think this happens? Presumably the idea of "representing" and 
"mirroring" is not intended as we might use the terms for the conscious aspect 
of our minds (i.e., that we are aware of representing and mirroring when we are 
doing these things). Presumably he thinks there is a tacit, non-conscious 
one-to-one relation between world object and thought object in the language of 
thought then?


> But there are always problems with such representations, limitations, and so 
> nobody is happy using them as the foundation for an answer.  And nobody much 
> likes Fodor using them as HALF a foundation, either.  So perhaps they are 
> something we use when we can, as we can, so that they might be sufficient but 
> not necessary?  That's about where I am on it.
>
> Josh
>

It sounds like this is the area where, perhaps, Fodor's thinking starts to 
break down (i.e., becomes less clear, less precise, less indicative of ideas we 
can say yea or nay to)?

SWM
