[Wittrs] Re: some helpful guidelines for reading Wittgenstein's philo...

  • From: "Stuart W. Mirsky" <SWMirsky@xxxxxxx>
  • To: Wittrs@xxxxxxxxxxxxxxx
  • Date: Sun, 16 Aug 2009 03:10:36 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, gprimero <gerardoprim@...> wrote:
>
> (Stuart) No, no, I wasn't proposing they were evidence Gerardo, only
> that they indicated to me that I could do things without being aware
> of what I was doing. This SUGGESTS to me at least the possibility that
> the Freudian idea of the subconscious could also be true.
> (Gerardo) That´s exactly what I meant by "evidence": "this suggests
> that". But my point was that your evidence for "doing things without
> being aware" is NOT evidence for any of the freudian hypotheses. 

And I am agreeing!

>It´s
> like if you say "I´ve thought in Sue and then she called me, and that
> suggests to me that I could have paranormal skills". The premise may
> be true, but it cannot be taken as evidence (not even remote) for the
> conclusion, unless until more plausible explanations (e.g. that it´s
> mere coincidence) are ruled out.
> 

Now that's an interesting point! I went for a walk with my wife this morning 
through a local park we frequent. For many years my wife and I have kind 
of joked about being mind-linked, there being some sort of telepathy between 
us. It has often been the case, noticed by both of us, that we say things the 
other was "just thinking." Then we often look at each other and make a joke 
about that telepathy thing. (When we were younger I think we actually believed 
there could even be some truth to that.)

On this particular walk we got to a certain point where we could see the end of 
the park (we were on our way back) and my wife asked if I was going to go visit 
my mother. I said I was just thinking that I was going to do that and then 
added, jokingly, there's that telepathy thing again.

She replied, 'but I always ask you whether you're going to see your mother when 
we reach this part of the park.' I looked ahead at the familiar landmarks and 
realized that that, of course, was it. The telepathy thing was easily explained. 
Just as our memories are triggered by associations, and work by assembling 
various bits and pieces of smaller recollections, so both our brains had been 
trained by the sights in view at that point in a walk we often took to recall 
the same thing: that my mother lived nearby and that now would be a good time 
to drop in and visit her. Voila, the telepathy mystery at last explained (with 
a nod to Edelman and Hawkins).


> (Gerardo before) That´s exactly what I consider a pseudoexplanation:
> if you try to explain an observable event by postulating an event that
> you know even less of your explananda, then you have no explanation at
> all: you have two events in need of explanation instead of one.
> (Stuart) I think you are missing my point here, Gerardo.
> (Gerardo) I think not. I´m pointing to the pragmatic relation between
> means and ends.
> 

But I wasn't arguing that the involuntary intentional act of grabbing the towel 
bar was evidence for the Freudian hypothesis, only evidence that involuntary 
intentional behavior was possible! Freud's theories are a different question. 


> (Stuart) This isn't about metaphysical realms. However it is
> manifestly true that processes happen in our brains while we are
> thinking, etc. It's strange to take no account of these in explaining
> how it happens that brains are conscious in certain cases. Frankly I
> go with Dennett here and think there is no reason to think that we
> have privileged access to our own minds at all levels.
> (Gerardo) It´s obvious that "we don´t have privileged access to our
> brains" (we don´t know what´s doing our cingulate cortex right now),
> but what does it mean "we don´t have privileged access to our minds"?
> If you´re using "mind" for the cases of private experiences,


I am.


> we do
> have necessarily "privileged access" in those cases (if there´s no
> such privilege, there´s no point in calling them "private
> experiences").
> 

The issue has to do with the full goings-on in the mind. For instance, we think 
of ourselves as selves, as having unity and continuity, and as being uniquely 
separate from the events and internal processes we CAN observe. But just because 
that's how it looks to us doesn't mean that's how it is. The self could be, as 
Dennett suggests, a composite picture in constant flux, as much a set of 
disparate mental events as the events apparently being observed. THAT is the 
sense in which I was referring to our not having privileged access. That we 
look like a self to ourselves doesn't mean that we really are what we look like.


> (Stuart) After all, either the features we associate with
> consciousness simply pop into being full-blown (which really IS a
> dualist supposition) OR they are composites of more basic features
> which are not themselves conscious.
> (Gerardo) You seem to be treating a feature as a thing.


Linguistically we often use "thing" for far more than physical objects. "That 
thing you do", for instance.


> People (of
> whom we say they´re "conscious", or not) are indeed "composed of more
> basic parts" (e.g. organs, cells, molecules...). But that doesn´t
> imply that "consciousness" (which is not a thing but a nominalization
> of an adjective)


There are many kinds of things we apply the term "thing" to in our language.


> must be either an holistic something "simply pop into
> being full-blown", or the accumulation of some kind of
> "protoconscious" fragments. A dog can bark and run without the need of
> an accumulation of parts that proto-bark or proto-run.
> 

But the dog consists of many parts, all working together, of course, but 
separable to varying degrees without bringing the dog to an abrupt end. A 
brain, in this case, is like a dog. There is also no evidence that anyone can 
think or talk or reason or behave without a brain.


> (Gerardo) You must clarify with Glen the usages of "mental" and
> "behavioral". You have at least two senses of each term: mental as
> privately experienced event, mental as non-experienced mentalist
> construct (i.e. freudian unconscious, multilayered consciousness
> systems), behavioral as overt muscular action, behavioral as
> interactional S-R event (including private occurrences). You´ve
> acknowledged that "perceptual response" might be considered "behavior"
> in one sense of the term, and that there´re differences between
> private experiences and non-experienced mentalist constructs.
> (Stuart) Yes. But I have never suggested that what is mental is
> somehow basic in any ontological sense. It is, on my view, just
> another aspect of the physical. Merely noting that there is a mental
> as well as a physical aspect to things is not dualist though some at
> least want to claim it's "property dualism". Personally I think THAT
> concept is rather a mixed metaphor and needs more explication. As
> Searle explains it, property dualism is just confused substance
> dualism while he, himself, denies being a dualist of any sort himself,
> while often speaking of consciousness as if it were an ontological
> basic in which case he would be a dualist himself, albeit without
> admitting it. Dualism would certainly need to be explored if I were to
> get into this with Glen but he's been a mite testy of late so I'm
> reluctant to open up a new can of worms!
> (Gerardo) OK, but I think that sometimes you mix this concept of
> "mental" with other kinds of concepts, and that´s problematic.


Can you give an example or two?


> I
> propose you to distinguish the following meanings of "mind". M1 is
> "mental as private event", events that can be detected at least by one
> person.


I would say accessed by only one person, the experiencer. That one can detect 
evidence in another that some mental event is going on is a different class of 
detection.


> M1 is the content of episodic mental concepts: perceiving X,
> sensing X, feeling X, having imagery of X, dreaming X, and saying X to
> oneself. M2 is "mental as disposition of overt or private
> behavior" (see that this is not the logical behaviorist proposal of
> overt dispositions, but a functionalist proposal of overt-plus-covert
> dispositions), and includes concepts like being intelligent, knowing
> about X, having a belief, or understanding a sign. M2 is still
> "observable", but in a less direct way than M1: people can detect many
> criteria that support or refute the ascription of the disposition
> (this usually happens very quickly and without the need of reasoning).


Your M2 is more detectable than my M1 because we can have experiences without 
letting on (that is, we cannot observe another's private experience, though we 
can observe evidence from which we can guess or conclude that something private 
is going on).


> M3 is "mental as speculative constructs", it includes all the imagery
> and conceptualization that are not based on observation, direct or
> indirect, but on the social reinforcement of some ideas: freudian
> unconscious, religious souls, unconscious "mental
> representations" (unlike the so-called "neural representations", which
> are observed physiological events that correlate with other
> variables). Perhaps if we agree with this classification (or we make
> some changes on it, until we get an agreement) we´ll be able to solve
> some misunderstandings. What would you say until here?
> 


I don't recognize the validity of your M3 at all. Perhaps you just mean 
different ways of conceptualizing M1, though?

> (Gerardo before) I guess that "correlated events" would be a better
> name, and much more discriminative. We can detect different kinds of
> events, and then assess different kinds of relationships, without
> obscuring them with the usage of unclear concepts.
> (Stuart) I'm not sure that gets at what I have in mind. At some point
> we have mental pictures which are representations and it's not
> unlikely, given what we know of brains, that there are various signal
> transformations that underlie the mental representations we are aware
> of. In that case, they may best be described as representations, too,
> albeit of a different order. "Events" strikes me as too general here,
> though it may be the case that each such "representation" is also some
> brain event.
> (Gerardo) "Events" is a general term, but the "correlated events"
> would be individuated by their features (e.g. the physiological event
> of the kind X is correlated with the experiential event of the kind
> Y). I´m saying that this kind of language is much more informative.
> The polisemic meaning of "representation" makes people believe that
> they´re saying more when they say less, and that they have an
> explanation when they have only a speculation without evidence.
> 


I'm making headway in the Hawkins book. He actually has a very interesting 
thesis and model for how the brain produces mind (consciousness) or, in his 
case, the part of mind he is interested in: conscious intelligence. I think it 
has a bearing on the behaviorist model as well as on Turing and on AI. I will 
begin posting something on it in a separate thread soon. I think it will be 
well worth discussing. 


> (Gerardo before) I´m not concluding that "it cannot" be used. We
> obviously "can", but my thinking is pragmatic: we possibly have better
> options if our purpose is empirical and technical research.
> (Stuart) I think your aim, by your own description, is to study how
> psychology (the state of minds) relates to behavior.
> (Gerardo) My aim is to study how the environment and the organism
> interact, including the role of private experiences as a very
> important part of such interaction, but not as an initiating and
> uncaused inner agency. I think that it´s too reductionistic to define
> psychology as "the state of minds": it reduces the person-in-context
> to a passive and egocentric mind separated from the world.
> 

That's not what I had in mind.


> (Stuart) I'm interested in something a bit different: to study how
> brains produce the mental including all possible mental states. This
> is not to say there aren't inputs involved but only to ask how do
> those inputs become psychological phenomena (including actions and
> dispositions to act)?
> (Gerardo) I don´t think that "brains produce the mental". The private
> experiences are not "products": they´re not things but events. 


These are word quibbles. We must get past arguing over terms if we're going to 
make real progress. Brains exist in a certain relation to minds such that you 
cannot have a mind without the proper kind of brain, in good working order and 
operating appropriately. Some people here don't like calling this a "causal" 
relation. I have no problem with it. Others, like you, apparently don't want to 
speak of it as "producing". Again, I have no problem with that. We could, I 
suppose, speak of brains engendering minds, or bringing minds about, or making 
minds, or doing minds or, as Kirby recently has suggested, "hosting" minds 
(though the connotations of that are way off in my opinion). I frankly don't 
care what term we settle on as long as it captures what I take to be the fact 
of the matter: that minds are dependent for their being on brains or some 
equivalent physical platform.

Do you want to offer a possible term to describe this relation, one we can 
hopefully agree on?


>And the
> brain is not the "cause": it´s a necessary but not sufficient
> condition for the occurrence of those events.


It's only not sufficient when it's turned off, not working, impaired . . . that 
sort of thing. Otherwise there is absolutely no evidence and no reason to think 
that brains are not sufficient!


> By the way, if you claim
> that "the brain produces the mental", it is a kind of dualism: you
> have the brain and the mental as two separate things, and then you´ll
> have to choose between epiphenomenism or interactionism.


THAT depends on what is meant by "mental". If you insist on thinking "mental" 
must mean something separate and apart from the physical universe, then it would 
be dualism. BUT THAT ISN'T WHAT I MEAN AND I HAVE BEEN PRETTY SPECIFIC ABOUT THIS.


> The
> nondualist options are either the mind-brain identity thesis (e.g.
> Place or Quine) or the ascription of mental terms to whole persons-in-
> contexts (e.g. Kantor) where the brain is a necessary but not
> sufficient condition as a participant of a wider set of conditions.
> 


This "necessary but not sufficient" stuff is nonsense when you really think 
about it. All you need is a fully functioning brain. Of course to be fully 
functioning, given current technology, you need it to be part of an extended 
system, i.e., a living body. But THAT doesn't make it necessary but not 
sufficient!  

As to mind-brain identity, that is just a way of speaking. I prefer to speak of 
mind as being an aspect of certain brain processes. Another way of putting this 
is, as Marvin Minsky does, to say that mind is a "system property". Neither of 
these locutions implies that there isn't something called "mind" which is 
different from what we call "brain". They only indicate that mind is an aspect 
of certain kinds of brain behavior.


> (Gerardo before) I´m not arguing that "mental images" are behaviors,
> but that "imagery" is behavior (it´s an operant or respondent
> occurrence of perceptual responses).
> (Stuart) I find that a stretch of the term.
> (Gerardo) It´s not a "stretch", it´s an explanatory account of (at
> least) some instances of imagery.
> 

It so broadens "behavior" as to eliminate the distinction between behavior and 
non-behavior.


> (Stuart) When I had that totally private image of my computer screen
> (I had been thinking about it as I looked at it intently shortly
> before I blacked out so it's not surprising that that was what I had
> in my mind when I started coming to. That I was on the floor, my eyes
> closed and then open after having collapsed and fallen there, is
> certainly behavior. That I thought I was still looking at my computer
> screen but could not somehow focus on it and read what was written
> there and that it faded into oblivion the more I tried to read it,
> hardly seems like behavior. It was like a dream of course since I was
> lying on my back on the ground (which was indisputably behavior).
> (Gerardo) Yes, I´d agree with you in all of this, I´m not trying to
> "reduce" your description to something else. You had an event that
> "was like a dream" or "like a visual perception". But this is just a
> description of the experiential event, and I was giving a plausible
> explanation. I guess an explanatory account should include the
> description of the target event, the actual environment, the
> organismic conditions including its physiological processes, the
> previous contingencies of learning of each stimulus and response, and
> the ethological unconditioned responses and mechanisms.
> 


The only thing important as a phenomenon was the image. The issue then is to 
explain its occurrence. Calling it "behavior" doesn't do that.


> (Stuart) I think that so broadens "behavior" as to render it
> indistinguishable from what
> others call mental events, in which case why bother?
> (Gerardo) Not all that others call "mental events". See that, using the
> classification of M1, M2 and M3, I would include M1-experiential
> events as "behavior", but would exclude M2-dispositions and M3-
> fictions: they´re not behaviors, they´re words that might be part of
> some linguistic behaviors.
> 

I have a problem with your Ms (see above).


> (Gerardo before) Understanding a sign can be understood as a
> physiological event that
> changes many dispositions of overt and covert behaviors, which not
> necessarily
> are overtly or immediately shown.
> (Stuart) It CAN but not in this case when the issue was not
> physiological events but
> mental images.
> (Gerardo) You cannot claim that there were not physiological events in
> your brain when you had such images.


I did not. I made the point that the mental image was the phenomenon that 
needed explaining. Calling it "behavior" doesn't do that. You can say that it 
was a function of certain brain behavior and mean by this certain brain 
processes. But if you're doing that, you're doing just what I was already doing. 
The only difference is I would not call such an explanatory effort anything 
remotely like behaviorism! Talking about brain processes is not to speak of the 
organism's overt or even covert behaviors.


> You´re not conscious of your
> physiological events:


That's right, another aspect of our own minds that we do not have privileged 
access to!


> they´re not stimuli with which you interact, they
> ´re necessary components of each of your responses. Physiological
> events are a necessary condition of every experiential event, as far
> as we know.
> 
> (Gerardo before) Having an image can be understood as a covert
> simulation of a perceptual response.
> (Stuart) I think it makes more sense to explain it as I did above,
> i.e., that it was the last thing I had been focusing intensely on and
> I had retained the general structure of what I was seeing in my head
> (kind of an after-image) but, because our brains don't retain all the
> details (as Hawkins posits) I was unable to plug them in from memory
> and so, the more closely I thought I was looking at it, the less clear
> it became -- precisely the opposite of what we would expect to happen
> if I were really looking at it.
> (Gerardo) I don´t see much difference in your description. You´re
> saying that you had first a perceptual activity ("focusing intensely"
> in the screen) and after the blackout there was a repetition of such
> activity without the presence of the stimulus (a dreamlike event,
> "kind of an after-image"). This activity could be accounted by the
> simulation theory (the neural activity of the after-image is similar
> to the activity of the actual perception). 


Yes, so far. A stimulus was present.


>Then you had an operant
> response of trying to focus your sight, and the actual perceptual
> response replaced the dream-like event.
>

Yes, and I agree that I had a physiological response as well. What's left out 
here is the phenomenon of actually seeing the image, albeit imperfectly. If we 
have mental events (definitionally private to ourselves), then they need to be 
included in the general explanation and explained themselves as part of what it 
means to be conscious (to have consciousness, or whatever the latest term is 
that people here are prepared to settle on so we can advance).

 
> (Gerardo before) I´ve said that Moore (1980, 1995) divided two
> categories: (a) interoceptive and propioceptive stimulation, (b)
> covert behavior (imagery, dreaming, self-talk).
> (Stuart) My blackout "dream" involved no self-talk and no narrative,
> just an image I was trying to see, an image that, of course, wasn't
> there. How is the image alone "covert"? Not only is it not shared, it
> cannot be (directly anyway).
> (Gerardo) Your experience was a repetition of previous perceptual
> responses. When a perceptual response is learned, many causes
> different than the actual presence of the stimulus may trigger it.
> Thinking of them as "images that are observed" is useful as a way of
> talk, but it´s problematic if taken literally.

But I was seeing it, albeit not with my eyes. Hawkins, in his book On 
Intelligence, describes an interesting experiment in which a neuroscientist 
managed to route information from a video camera through the tongue of a blind 
person, enabling him to actually see images. The fMRI showed that the same area 
of his brain that would have been active if he had had sight from his eyes was 
activated. So the man saw through his tongue. Did he really see? Well, the 
stimuli to the tongue were in the form of minute pressures applied according to 
the schema of the video picture, and the images the man saw were what the camera 
was picking up. The conclusion of the experimenter was that the brain had itself 
routed the imagery to the seeing part of the neocortex (without any intrusive 
"rewiring" by the experimenters). The man then exhibited behaviors suggesting he 
was seeing.

If this man could observe images with his tongue why doubt that I can say I 
observed images in my mind? Presumably the same parts of my neocortex were 
activated.
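
To make the mapping concrete, here is a rough Python sketch of the kind of 
translation such a device might perform. The grid size, pressure scale and 
function name are my own illustrative assumptions, not details of the actual 
experiment; it only shows how a picture can be re-coded as a pattern of 
pressures.

import numpy as np

def frame_to_pressure_grid(frame, grid_shape=(20, 20), max_pressure=1.0):
    # Downsample a grayscale camera frame to a coarse grid of pressure
    # intensities, one value per point on the tongue display. Grid size
    # and pressure scale are invented for illustration only.
    h, w = frame.shape
    gh, gw = grid_shape
    trimmed = frame[:h - h % gh, :w - w % gw]
    cells = trimmed.reshape(gh, trimmed.shape[0] // gh,
                            gw, trimmed.shape[1] // gw)
    grid = cells.mean(axis=(1, 3))        # average brightness per cell
    return (grid / 255.0) * max_pressure  # brightness -> pressure level

# Example: a 240x320 frame becomes a 20x20 pattern of pressures.
frame = np.random.randint(0, 256, size=(240, 320)).astype(float)
print(frame_to_pressure_grid(frame).shape)  # (20, 20)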


> Images are not like
> observed objects. Observed objects remain when we don´t see them, but
> "images" only remain while we are having the imagery. Observed objects
> may be ostensively denoted, but "images" don´t.


All you're doing here is saying I couldn't have seen or observed something 
because it didn't fit the definition of what we usually mean by "seeing" and 
"observing". And yet, if the experiment described above is accurate, there is 
evidence that seeing and observing may require a broader definition.


> There´re no literally
> "images" as internal copies of stimuli: there´s only the whole
> organismic event of "visualizing X".
> 

How do you know that? In fact, Hawkins argues that that is exactly what happens, 
except that what we internalize is more akin to general structure, to patterns, 
and not the complete picture in all its details.

The brain, he suggests, is a memory machine, and memories consist of spatial and 
temporal patterns our brains capture and retain as what he calls invariant 
representations. These roughly correspond to what Plato (he posits) inaccurately 
thought of as Forms. He argues that there really are such forms, but that they 
aren't what Plato imagined, since they are built up by the brain rather than 
taken from some airy-fairy ideal realm.

They are generic templates which we use to sort and organize the raw data that 
comes in as sensory information. It's an interesting thesis (though I'm only 
halfway through his book thus far). Certainly he is offering a challenge 
to many here (including me) since in a certain sense he is affirming a key 
aspect of Plato's ideas in explaining the brain, something Wittgenstein clearly 
laid to rest (or so we all thought). I can't wait to see some of the responses 
on this list to that proposal!

In brief, Hawkins argues that the brain isn't a computation machine but a memory 
machine, and that it works by pattern recognition, i.e., by developing and 
storing these generic templates and quickly retrieving them in everyday 
activity. That activity involves constant feedback between our sensory input 
mechanisms and our motor output mechanisms, with lots of cross-connections all 
the way through. He is much more precise and detailed on the way the brain 
might actually work than Edelman was -- and much clearer. His view is that the 
brain is much simpler than we imagine, contra Edelman's thesis of necessary 
complexity. On Hawkins' view, every part of the neocortex essentially 
implements the same algorithm, albeit in different applications and at 
different levels of detail.

But like Edelman and Searle, Hawkins argues that computers aren't up to the 
challenge of replicating minds. His reason is different, though. While Searle 
offers a spurious logical argument and Edelman a complex thesis about 
selectionism vs. instructionism, serendipity vs. logic, Hawkins' argument is 
that what the brain does with relative simplicity in less than a hundred steps 
would take a computer millions or billions of steps because of the need to 
develop algorithms for everything (and the vast level of detail that would need 
to be addressed). He agrees that brains are parallel, but argues that as memory 
machines brains are able to avoid the need for constant computation, a need with 
which even a massively parallel processor (as Dennett proposes) could not keep 
pace. He explains the memory function in terms of the generic templates I've 
already mentioned. (He's apparently looking to develop thinking machines that 
operate on the same principle as brains.)
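
To illustrate the contrast, and only as a toy sketch of the general idea rather 
than of Hawkins' actual model, here is what recognition-by-stored-pattern looks 
like in a few lines of Python; the class name and the patterns are invented for 
the example:

import numpy as np

class TemplateMemory:
    # Store a few "invariant" templates and recognize new input by
    # nearest-neighbor lookup rather than recomputing anything from
    # scratch. A toy sketch of memory-based recognition, nothing more.
    def __init__(self):
        self.templates = {}

    def store(self, name, pattern):
        self.templates[name] = np.asarray(pattern, dtype=float)

    def recognize(self, pattern):
        pattern = np.asarray(pattern, dtype=float)
        # Pick the stored template closest to the (possibly noisy) input.
        return min(self.templates.items(),
                   key=lambda kv: np.linalg.norm(kv[1] - pattern))[0]

memory = TemplateMemory()
memory.store("vertical bar",   [0, 1, 0, 0, 1, 0, 0, 1, 0])  # 3x3 patterns, flattened
memory.store("horizontal bar", [0, 0, 0, 1, 1, 1, 0, 0, 0])

noisy = [0.1, 0.9, 0.0, 0.2, 1.0, 0.1, 0.0, 0.8, 0.1]
print(memory.recognize(noisy))  # -> vertical bar

Once the templates are stored, recognizing a new input is a single lookup, which 
is the flavor of the point about memory versus stepwise computation.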

Anyway, all you can say at this point is that you don't believe it's true that 
the brain works with representations. Your hypothesis runs counter to Hawkins'. 
However, Hawkins has an interesting, comprehensive theory that seems to do a 
better job than even modified behaviorism as you've presented it.

> (Gerardo before) Having an image can be understood as a covert
> simulation of a perceptual response. Once you´ve learned the complex
> behavior of "seeing a dog", the same activity can be triggered by
> other causes that are not the presence of a dog. There´s no dog,
> outside or inside: there´s only a covert activity that has some
> similar effects (and other different effects).
> (Stuart) If you call the image an "activity" and mean anything else but
> the brain events that underlay it, then I think you are stretching the
> meaning of "activity" beyond where it can sensibly be stretched.
> (Gerardo) I don´t understand our point here.
> 


Seeing an image is not behaving. It's passive, not active. It's not to do 
anything in the usual sense of "do". It's just to have the image or thought, 
etc. Yes, we can say he is thinking, meaning he's doing some thinking. But there 
the point is to describe the person's comportment (behavior) in the world. The 
point of talking about "behavior" was to eliminate the mystery of what's going 
on inside the mind, to wipe away the phenomena of mental events in any 
explanation of the choices and actions of a given organism. But here you have 
simply extended the idea of behavior to that, too. So the distinction is lost. 
Whether you call it a mental event or a covert behavior of the mental type, 
it's the same thing, i.e., it's that same old mental event the idea of focusing 
on behavior was intended to get rid of.


> (Stuart) Seeing is not the same as looking.
> (Gerardo) Of course not.
> 
> (Stuart) Yes but then it isn't "behaviorism" per se but some hybrid.
> Frankly I share some of your preferences for behaviorist accounts. I
> just think they don't cover everything and that the solution lies, not
> in redefining "behavior" so they do but in broadening the picture of
> mind so that we see that it's not all "behavior."
> (Gerardo) Well, if it´s some hybrid, so what? Many good things came
> from hybridization of previous proposals. I´ve been arguing that your
> criticisms don´t apply to this proposal. I think this proposal covers
> everything that must cover as explananda: publicly and privately
> observed events. It includes M1, M2 and M3 as explananda, but only M1
> can take part of the explanation, and never as uncaused agency. It
> also includes physiological events as an important mereological
> component of the explanation, but never as "the cause of mind and
> behavior".
> 


I'm suggesting that expanding the idea of behaviorism in this way either walks 
away from any real behaviorism or just muddies things up.


> (Stuart) Quine's approach as you've defined it strikes me as yet
> another effort at redefinition. However, these terms (mental terms)
> occupy an unusual place in our language game so it seems we are always
> busy trying to get our hands around these particular greased pigs.
> (Gerardo) There´s nothing wrong with redefinitions. The issue is how
> useful results the redefinition for each speaker´s purposes.
> 


Yes and I've explained why I think this attempt is problematic.


> (Stuart) Well as I've repeatedly said, I am not arguing against
> behaviorism. I don't know enough about it. I am arguing against the
> view you say no one actually ever held (and I've also expressed my
> opinion, numerous times, that I'm inclined to agree that no
> respectable thinker ever held such a view). My point is that
> Wittgenstein certainly cannot be enlisted in the class of simplistic
> behaviorists and even expressed his doubts about behaviorism as I
> recall -- but I don't know if he thought of it in the simplistic way
> we have both agreed is mistaken or not.
> However, I do think that some of the moves you have made to render
> behaviorism more sophisticated strike me as a bit weak, i.e., why
> redefine behavior to include things not typically understood as
> behavior? Wouldn't it make more sense to simply broaden your theory to
> include behaviors and other stuff (like mental images, having
> realizations, etc.)
> (Gerardo) Why should we take as a given that they are "other stuff"?


Because they cannot be satisfactorily explained as "behavior" without severely 
bending and twisting the term "behavior" in a way that undermines its original 
meaning.


> Couldn´t we propose a different conceptualization? Should we take
> something for granted just because many people considered it to be
> truth? I think that when you put aside the inertia of thinking they´re
> other stuff because we´ve always thought they´re so, the rival
> conceptualization is not weak at all: it explains much more, and it
> avoids the many problems of the traditional "other stuff"
> conceptualization.
> 


I think it fails to explain the phenomena I have been referencing here. Just 
calling these phenomena "behavior" doesn't make them that. 


> (Stuart) My point is that the broader thesis you have been sketching
> out here, and arguing for, is more stipulative than empirical since it
> involves stipulating new meanings to terms.
> (Gerardo) No, it´s not. Once you compare the two conceptualizations,
> there´re differences in what we can and cannot do with each of them.
> For example, when you weaken the dichotomy between public and private,
> you can take the learning mechanisms that have been studied with
> public behaviors and apply them to the explanation, prediction and
> control of private events (which is a valuable purpose, both for
> empirical and technical research).


I see some value here, as strategies for studying behavior and its influences, 
but none at all for explaining how brains make minds.


> And when you strengthen the
> distinction between M1 and M2/M3, you can avoid "Throwing the baby out
> with the bath water" (like logical behaviorism did) and also you can
> avoid "keeping the bath water for fear of trowing out the baby" (as
> speculative cognitivists do).
> 

I have some problems with your three M's. See above. 



> (Stuart) While we can always redefine our terms, sometimes
> redefinition can go too far. I genuinely see the effort to call mental
> images behavior as just such a mistake, whether anyone ever held a
> more restrictivist theory of behaviorism or not.
> (Gerardo) I repeat: it´s not "mental images", it´s the activity of
> imagery what I´m considering as behavior (in the sense that it´s event-
> like activity, and in the sense that it´s accounted with the same
> principles that apply to other perceptual responses).


In that case you are still leaving out the images themselves. They are 
certainly real enough, if only fleeting and, finally, private.

> The
> nominalization of "mental images" may lead to mistake them with
> observed objects, when the grammar of talking about imagery and
> talking about observation has some important differences.
> 
> Regards,
> Gerardo.
>

Perhaps. Still, I think it makes eminently more sense to speak of my "mental 
image" of the computer screen I saw in that incident than of my behavior of 
seeing an image. What did I see, after all? If it was real, how did it get 
there? If it wasn't, why do I think I saw it? What would it mean to describe it 
as unreal? Well, it wasn't the actual computer screen. Still, there it was in my 
"mind's eye".

SWM 
