[C] [Wittrs] Digest Number 94

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 3 Jan 2010 10:37:03 -0000

Title: WittrsAMR

Messages In This Digest (15 Messages)

Messages

1a.

Re: [C] On Time

Posted by: "Cayuse" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 3:47 am (PST)



----- Original Message -----
From: Anna Boncompagni
To: wittrsamr@freelists.org
Sent: Friday, January 01, 2010 11:17 PM
Subject: [Wittrs] Re: [C] On Time

> Time is not a "something", we cannot speak of time as we speak of something. More correctly:
> we do often speak of time as we speak of something, in our ordinary language, and there's nothing wrong with it.
> But if we are talking consciously directing our attention on language, then, if we speak of time as it were a something,
> we fail to catch it, because we don't realize that we are using a metaphor.

Yes!

> We don't experience time. Our experience depends on time. Time is not part of the world.

The experienced world is in continual flux -- we say that it "changes",
but then we stand in danger of falling into a similar trap and reifying "change".
We contrast movement and rest, as though they were different "things".
Physics abstracts invariants because they usefully permit prediction,
and it seems to me that it plays a similar game with the concept of time,
treating it as though it were the static medium within which events have their being.
But how can we abstract an invariant from the fact of the continual flux of the experienced world?
To do so is to fossilize the world -- to render it dead -- however useful physics might find that picture.
(I think this is what Bergson was getting at.)
The mathematical models of physics end up with a "block universe" in which all change has been eradicated,
and we are then left with the pseudo-question of how change is experienced within such a "block universe".

Now then, I mention this because I think a similar thing happens with our concept of "experience" or "consciousness".
Our objective models give us a picture of the universe in which all conscious experience has been eradicated, and we are then left
with the pseudo-question of how consciousness might arise within a universe that was (according to the model) initially devoid of it.
The fact is that the experienced world was never "initially devoid of consciousness" because that world appears ONLY as the data of
consciousness (or to put it another way, the word "consciousness" is being used to allude to the fact of the very existence of that data).
Our objective models are just more of the data of consciousness, and in taking them to be ontologically more fundamental
than the world of experience in which they appear, we put the cart before the horse.

Perhaps it would be well to put it in the same terms that you speak of time...

Consciousness is not a "something", we cannot speak of consciousness as we speak of something. More correctly:
we do often speak of consciousness as we speak of something, in our ordinary language, and there's nothing wrong with it.
But if we are talking [consciously?] directing our attention on language, then, if we speak of consciousness as it were a something,
we fail to catch it, because we don't realize that we are using a metaphor.

> Do you think that this characterization of time is somehow Kantian? I feel strong analogies with Kant in here.
> But analogies end when W. explains to us how problems arise -- i.e. when we use language looking at it,
> when we first make a sentence and then look at it and see time as an object.
> I can find no awareness of the mistakes of philosophical language, in Kant.

Kant remains an enigma to me.

1b.

Re: [C] On Time

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 6:40 am (PST)




AB,

Wonderful to see you again!

I find we "speak the same language", which is ironic, considering...

> Time is not a "something", we cannot speak of time as we speak of something.

Can't we?

Don't we?

Well, "time" occurs as a substantive in many of our sentences. But is that the same thing?

We might say: where we say things like, "Time passes when you're having fun," surely we really mean, "Events seem to transpire more quickly..."

But does the latter really capture the former?

> More correctly: we do often speak of time as we speak of something, in our
> ordinary language, and there's nothing wrong with it.

Quite so.

> But if we are talking consciously directing our attention on language, then, if we speak of time
> as it were a something, we fail to catch it,

Do you mean that we fail to catch time or that we fail to catch ourselves speaking in that way? Or both?

> because we don't realize that we are using a metaphor.

Or better: a picture. A metaphor says that this is (like) that, with the "like" suppressed. A picture may guide our usage without involving any comparison.

(That's not quite right either. Our usage may rest so heavily on the picture that without it, there isn't anything with which we could make a comparison.)

> We don't experience time.

Don't we?

Watching, waiting impatiently for the phone to ring?

Performing a piece of music. Listening to it. Recognizing the regularity of the musical pulse, being aware of the time between each pulse, am I experiencing only the pulses and the silences? And not the durations of the silences?

> Our experience depends on time. Time is not part of the world.

hmmm...

> Since language speaks about facts and time is not a fact, but more like a condition for facts,

My laptop is not a fact. That I am using it to type this response is.

But I don't think you mean anything like that.

> language can't speak about it.

Can't it?

Doesn't it?

Or rather, don't we, using language?

> So, we can talk of logs coming to an end, not of time coming to an end.
>
> Do you think that this characterization of time is somehow Kantian?

Indeed, my nose tells me it is. A more thorough unpacking of what you're saying compared with some specific Kantian theses might be interesting. But off hand I'd say: yes, there's definitely a family resemblance.

> I feel strong analogies with Kant in here. But analogies end when W. explains to us
> how problems arise -- i.e. when we use language looking at it, when we first
> make a sentence and then look at it and see time as an object.

I'd say we sometimes look at the surface grammar of the sentence, but sometimes we see time as an object because of the picture we may have been using all along, but we try to apply the picture to a queer question like, "What is time?" when that wasn't how we were using the picture previously.

> I can find no awareness of the mistakes of philosophical language, in Kant.

I wouldn't go that far. Obviously, if Kant had had all of the insights Wittgenstein had, we wouldn't have needed Wittgenstein. But what about the Kantian doctrine of "transcendental illusion"? Certainly, here there is at least a gesture toward the direction that Wittgenstein's thought would later take.

Thank you,

JPDeMouy

http://plato.stanford.edu/entries/kant-metaphysics/#TheReaTraIll


1c.

Re: [C] On Time

Posted by: "Rajasekhar Goteti" rgoteti@xxxxxxxxx   rgoteti

Sat Jan 2, 2010 8:31 am (PST)



Article from Wikipedia:

The Unreality of Time is the best-known philosophical work of the Cambridge idealist J. M. E. McTaggart. In the paper, first published in 1908 in Mind 17: 456-73, McTaggart argues that time is unreal.

McTaggart acknowledged that events seem to be ordered in time and that time's passage can be understood in terms of events moving from the future to the present to the past. He then set out to demonstrate the unreality of time by discussing two conceptions of time:

A: one where events find their ordering in time in virtue of instantiating different temporal properties at different times, and
B: one where events bear an unchanging (static) temporal relation to all other events (e.g. if event M is earlier than event N at any time, it will always be earlier than N).

McTaggart set out to demonstrate that time is an illusion by first showing that (B) alone (without (A)) will not guarantee the passage of time. He then shows how (A) (and its combination with (B)) leads to contradiction. Any attempt to avoid this contradiction leads to an infinite regress. He concluded that time is not a real part of our physical world.
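(A schematic way to put McTaggart's contradiction, added as a reading aid -- the formalization is mine, not part of the Wikipedia article. Writing P(e), N(e), F(e) for "event e is past", "e is present", "e is future":)

\begin{align*}
&\text{(1) every event eventually bears all three A-properties:} && \forall e:\ P(e) \wedge N(e) \wedge F(e)\\
&\text{(2) the A-properties are mutually incompatible:} && \neg(P(e) \wedge N(e)),\ \neg(N(e) \wedge F(e)),\ \neg(P(e) \wedge F(e))
\end{align*}

The standard escape -- "e WAS future, IS present, WILL BE past" -- restates (1) using second-order tenses, and the same incompatibility recurs at that level; hence McTaggart's infinite regress.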
sekhar

2.1.

Properties of Objects vs Quales of Experience

Posted by: "Joseph Polanik" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 8:06 am (PST)



J wrote:

>SWM wrote:

>1. The wetness (as physical behavior) of water being explained by the
>molecular structure of water (with no need to talk about the nervous
>system) serving as a MODEL of a KIND of explanation.

>2. The wetness (as sensation) of water being explained by the molecular
>structure of water IN CONJUNCTION WITH facts about the nervous system
>serving as an EXAMPLE of what a theory of consciousness might be
>expected to explain.

>Now, why might a philosopher present 1 in discussing consciousness?
>When it has nothing to do with consciousness?

this is a serious problem in the philosophy of consciousness: confusing
the quale of experience with the property of an object; particularly,
when the two are correlated and the same word is used for both.

'red' is used as the name of a color quale, the sensation of redness. it
is also used to designate electromagnetic radiation with a wavelength in
the 650 nm range as well as the object that reflects or radiates light
of that wavelength.

when teaching children their color words, a parent will point to an
object and say 'red'. in learning color words, the child may also get
the idea that the 'red' is 'in' the object; but, we expect philosophers
of consciousness to unlearn such childish ideas.

Joe

--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@


2.2.

Re: Properties of Objects vs Quales of Experience

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 12:08 pm (PST)



JP,

I previously wrote about

> >1. The wetness (as physical behavior) of water...
> >2. The wetness (as sensation) of water...

(Before replying to your specific remarks, just want to make clear to those who may be following along that this is a bit of a segue. My reason for introducing the distinction into the discussion was to differentiate two different sorts of claims and the role that such claims might serve in these discussions. Where JPolanik and I now go is a departure from that point, a discussion of the more general significance of that distinction.)

You wrote:

> this is a serious problem in the philosophy of consciousness: confusing

Is it? I doubt that many professional philosophers (as opposed to people discussing such things online just for fun, the legitimacy of which I had earlier neglected) make this elementary mistake. Now, whether they question how the distinction is drawn and whether they are confused in their reasons for doing this is another question.

> the quale of experience with the property of an object; particularly,

Probably not important here, but one reason I don't speak of qualia but of sensations is that the former term seems to be bound up in some authors with the idea of a private ostensive definition. The phrase, "quale of experience" seems to be connected to this. A sensation is an experience, not a property of an experience. And that a sensation has a certain character is to say that it is vivid, memorable, intense, brief, and so forth. Where "quale" is supposed to fit in here eludes me in some of these discussions.

> when the two are correlated and the same word is used for both.

If we can speak of "correlation" here, it is a "grammatical correlation". But the game of evidence in which "correlation" often occurs is not appropriate here. It is not as if we made an empirical investigation to determine that liquids that exhibit wetting behavior on substrates also happen to give us a sensation of wetness when the substrate in question is our own dermis!

>
> 'red' is used as the name of a color quale, the sensation of redness. it
> is also used to designate electromagnetic radiation with a wavelength in
> the 650 nm range as well as the object that reflects or radiates light
> of that wavelength.
>

Yes.

> when teaching children their color words, a parent will point to an
> object and say 'red'. in learning color words, the child may also get
> the idea that the 'red' is 'in' the object;

What sort of idea is this?

The child learns that her toy fire truck is red, that this crayon is red, and so forth. She may also learn that if the paint on the fire truck is scratched, the surface beneath the paint is not red but that if she breaks the crayon in half, the inside of the crayon is red as well. I'm certain that's not the point you're making, but I'm not sure what else you might mean.

The child later learns that under certain conditions things can look red that aren't. She learns to make such distinctions.

Later still, she may learn about optics, about reflection, absorption, and so forth. She learns that the fire truck is red because its surface reflects certain frequencies of light and absorbs others.

> but, we expect philosophers of consciousness to unlearn such childish ideas.

Must she have "unlearned" anything when she learns to say, "that looks red but in this odd light, I'm not really sure"? Must she have "unlearned" something when she studies optics? Or has she rather learned new facts and new ways of speaking?

And could she have learned this new way of speaking straightaway without learning the earlier one? (I am not asking an empirical question about early childhood development but a grammatical question about the conceptual connections between different games.)

Consider the following, from Wittgenstein's _Zettel_

415. For doesn't the game "That is probably a..." begin with disillusion? And can the first attitude of all be directed
towards a possible disillusion?
416. "So does he have to begin by being taught a false certainty?"
There isn't any question of certainty or uncertainty yet in their language-game. Remember: they are learning
to do something.
417. The language-game "What is that?"--"A chair."--is not the same as: "What do you take that for?"--"It might be a
chair."
418. To begin by teaching someone "That looks red" makes no sense. For he must say that spontaneously once he
has learnt what "red" means, i.e. has learnt the technique of using the word.
419. Any explanation has its foundation in training. (Educators ought to remember this.)
420. "It looks red to me."--"And what is red like?"--"Like this." Here the right paradigm must be pointed to.

421. When he first learns the names of colours--what is taught him? Well, he learns e.g. to call out "red" on seeing
something red.--But is that the right description; or ought it to have gone: "He learns to call 'red' what we too call
'red'"?--Both descriptions are right.
What differentiates this from the language-game "How does it strike you?"?
But someone might be taught colour-vocabulary by being made to look at white objects through coloured
spectacles. What I teach him however must be a capacity. So he can now bring something red at an order; or
arrange objects according to colour. But then what is something red?
422. Why doesn't one teach a child the language-game "It looks red to me" from the first? Because it is not yet able
to understand the rather fine distinction between seeming and being?
423. The red visual impression is a new concept.
424. The language-game that we teach him then is: "It looks to me..., it looks to you..." In the first language-game a
person does not occur as perceiving subject.
425. You give the language game a new joint. Which does not mean, however, that now it is always used.
426. The inward glance at the sensation--what connexion is this supposed to set up between words and sensation;
and what purpose is served by this connexion? Was I taught that when I learned to use this sentence, to think this
thought? (Thinking it really was something I had to learn.)
This is indeed something further that we learn, namely to turn our attention on to things and on to
sensations. We learn to observe and to describe observations. But how am I taught this; how is my 'inner activity'
checked in this case? How will it be judged whether I really have paid attention?
427. "The chair is the same whether I am looking at it or not"--that need not have been true. People are often
embarrassed when one looks at them. "The chair goes on existing, whether I look at it or not." This might be treated
as an empirical proposition or it might be that we took it as a grammatical one. But it is also possible in this
connexion simply to think of the conceptual difference between sense-impression and object [Objekt].

JPDeMouy


2.3.

Re: Properties of Objects vs Quales of Experience

Posted by: "void" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 5:43 pm (PST)




>
> when teaching children their color words, a parent will point to an
> object and say 'red'. in learning color words, the child may also get
> the idea that the 'red' is 'in' the object; but, we expect philosophers
> of consciousness to unlearn such childish ideas.
>
> Joe
>
Dear Joseph,

You are right; as the saying goes, learn to unlearn so that there is true learning.

thank you
sekhar


3.1.

Re: Consciousness and Quantum Mechanics

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 12:14 pm (PST)





--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> I just responded to your other comments but may have lost them. I don't know if I want to plow through all that stuff again since my responses were extensive. If that reply doesn't show up I'll make a decision later on as to whether to try to recap.
>
> In the meantime, I read your assertions that you believe I do not understand Searle's Chinese Room Argument and that you think there is evidence for this on "GoogleGroups". As I said to you in THAT response, feel free to cite specifics with links and cut-and-pastes as needed. Just making allusions to what others have claimed, to things you have alleged you have read, etc., without specifics is not to make an argument.
>
> I especially noted your statement that I do not understand what Searle means by "strong AI". I suggest you present evidence for that claim at which point I shall be glad to address it. Aside from that, I am not overly interested in unsubstantiated allusions. I expect you can do better than that.
>
> When you post your evidence for your claims with regard to my position on the CRA I suggest a separate thread be started, clearly labeled so others on this list who would probably prefer to avoid this rehash can do so without being inadvertently inconvenienced.

Hi Stuart,

I'll repost that to which you haven't responded. Simply clarify where I might be misreading you or Searle. I'm quite confident that I have Searle perfectly understood. The thesis of strong AI as Searle has it is that a properly programmed computer may have consciousness (or semantics without consciousness...) IN VIRTUE OF THE PROGRAM ALONE, never mind the hardware. Now once you mind the hardware, you must either be thinking of more computation, as in more robust hardware for more robust computation, or be thinking of hardware in noncomputational terms, as in brute physics (which Searle does not argue against, given his biological naturalism, which bottoms out in brute physics). I am operating from what I feel to be a good memory of your past gambits. Here is the post from here that you didn't get to yet, starting with a quote:

Stuart writes:

"These approaches [strong AI AS WELL AS weak AI as well as AI in general?--Budd]
all have in common the supposition that the brain is a type of
machine, albeit an organic one, and that it is such "machine" operations that
are responsible for what we recognize as consciousness (including its many
features)."

Searle points out that strong AI is incoherent because it is not machine enough.
Peter D. Jones (at Analytic) expresses the point quite well when saying that
biological systems have no software/hardware separability as strong AI systems
do. Once you have software/hardware separability, the program itself is too
abstract to count as an hypothesis as to how the brain (or any other real
machine without S/H separability, hence possible AI) causes consciousness.

Peter also correctly points out what Searle pointed out in his Scientific
American article: "Is the Brain's Mind a Computer Program?" (1990), namely,
that parallel processing is of no help because anything a parallel processor
can do may be done on a serial computer.
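
(To make the serial-simulation point concrete, here is a minimal sketch -- my illustration, not anything from Searle's article or from Peter's posts; the function names and the example update rule are hypothetical. A serial loop reproduces one synchronous "parallel" update by double-buffering the state, so parallel hardware adds speed, not computational power.)

# Minimal sketch: serially simulating one synchronous "parallel" update
# of N units (e.g., a connectionist layer). Each unit "simultaneously"
# computes its new value from the old state; a serial loop gets the same
# result by snapshotting the old state before writing the new one.

def parallel_step(state, update):
    old = list(state)  # freeze the pre-step state (double-buffering)
    return [update(i, old) for i in range(len(old))]

# Hypothetical rule: each unit becomes the average of its two neighbors.
def avg_neighbors(i, s):
    return (s[i - 1] + s[(i + 1) % len(s)]) / 2.0

print(parallel_step([0.0, 1.0, 0.0, 1.0], avg_neighbors))  # [1.0, 0.0, 1.0, 0.0]

The serial version may be slower, but it computes the same function, which is all the point requires.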

Hence the waffling: Stuart conflates parallel processing with what the brain
does. Given that Searle is arguing also against parallel processing, then,
given Stuart's conflation, it is understandable why Stuart would find Searle
harboring some sort of dualism when in fact he doesn't. The conflation of
computational processes with physics is the root reason for Stuart's critique of
Searle. It is also the main critique of a Searlean against parallel processing
as an improvement on serial computing--there ain't no intrinsic difference and
one ought not to conflate S/H and nonS/H systems if one is to be offering
coherent comments about the whole issue of strong AI, weak AI, and AI -- AI being
possible for Searle and weak AI being useful for Searle, but strong AI being
incoherent for Searle.

The only way strong AI seems coherent is if one conflates computation (or
information processing) and physics--something Stuart does and Searle does not.

Stuart wants it both ways. He wants to say with Searle that brains cause
consciousness but doesn't want to follow Searle when Searle notes that strong AI
is too abstract and amounts to a form of dualism.

Cheers,
Budd


>
> --- In Wittrs@yahoogroups.com, "J" <wittrsamr@> wrote:
> >
> > SWM,
> >
> > > Well perhaps you're just much smarter than I am, J.
> >
> >
> > More likely I merely read more attentively. I always get the feeling that you don't take your time. I do that too, only in different ways. When I am irritated, I tend to express myself in needlessly contentious ways when I would be better off pausing and waiting to send a reply.
> >
> > It seems (I may be wrong) that you reply as you read - going line by line -
>
>
> Sometimes I do and sometimes I don't. It depends on the day's dynamics. Either way, I generally go back and re-read before hitting send and make any changes that new information below requires to what I have already written up top.
>
>
> > and in so doing sometimes miss the connection between individual remarks and the larger message.
>
>
> In the case of Joe and the Quantum connection I wasn't following along closely until Joe made the claim that von Neumann's thesis undermines the Dennettian model at which point I began paying attention. My interest was in what von Neumann had to say and how it might be interpreted contra Dennett, that's all. When I realized that Joe was tweaking the argument to get his "metaphenomenal" take on it, it became less interesting but something I will still address if Joe wants to continue. He just has to answer the questions I posed.
>
>
> > It even seems that in the course of replying to one line, you sometimes forget a point made a few lines before and so lose things that might be obvious to someone who had read the message from beginning to end before replying.
> >
>
> I'm sure that happens. When I'm doing more than one thing at a time it's certainly possible. But as noted above, the reason I did not immediately pick up on the 1,2,3 vs. I, II, III dichotomy is I wasn't following closely re: the former and was mainly interested in von Neumann's thesis as reflected in the latter.
>
> > it is not von Neumann's argument at all but Joe
> > > Polanik's argument, which no longer has the veneer of von
> > > Neumann's authority, but, in fact, involves a departure from
> > > von Neumann's claims.
> >
> > I've stated my views on JPolanik's reading of von Neumann elsewhere. But for the record, you shouldn't be unduly impressed by the authority of von Neumann or anyone else in matters like this.
>
>
> For the record it had nothing to do with being impressed by von Neumann. Joe asserted that von Neumann had made a claim recognized as sound by at least some physicists which, if true, undermined Dennett's claim about consciousness. When I asked about that, he alluded to the role of consciousness in "collapsing the wave function" and I was interested to see how that might affect Dennett's thesis. As of now, I have concluded there isn't much there whatever von Neumann's actual position, but I am still open to seeing more from Joe.
>
>
> > Or rather: whether or not von Neumann said thus and such, you should also consider whether other experts agree on the particular point at issue.
>
>
> Here is a clear case of your misreading me, the very thing you've accused me of doing! Joe's point did not, on my view, hinge on whether von Neumann's thesis was accepted as gospel by all or some physicists but only on whether, if true, it really did undermine Dennett's model. As noted, from what I've seen so far it doesn't.
>
> Anyway, given the tenor of your comments to me elsewhere concerning your opinion of my understanding of Searle's CRA (as expressed in discussion with Ron Allen or anyone else), I now expect you to cite some actual passages and provide the URLs we can link to so we can see the context and then let's see who has it right on the CRA.
>
> By the way, while impugning my view of the CRA without specifying just what you are impugning about it, you have yet to present us with your interpretation and position on it. Perhaps that would be helpful in this context, too. Please do proceed with this as I never appreciate the kind of sniping you evidenced in that other post, making allegations without anything to back them up but allusions and innuendo.
>
> Thank you.
>
> SWM
>
> > Clearly, in physics and mathematics, von Neumann is an expert and one disagrees with him at one's peril, but "expertise" in the INTERPRETATION of quantum mechanics is another matter. There are so many competing views held by people who ARE experts on the experimental and mathematical side (and who are largely in AGREEMENT on THOSE issues) that we shouldn't really speak of "authority" here.
> >
> > JPDeMouy

3.2.

Re: Consciousness and Quantum Mechanics

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 1:54 pm (PST)





--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> Joe, I don't think we have too much difference about what's physical based on what you've said though I'm certainly not in the quale camp. I think, though, that the really important differences in our views came out in that last post you did and which I have already responded to. So we don't need to pay too much attention to what I take to be differences that aren't material to the question of the challenge posed to Dennett's model by your interpretation of von Neumann's quantum theory claim. So let's focus on what came out in that other post then, i.e., whether there is an argument in your interpretation of von Neumann for an extra-physical feature of what we mean by consciousness or whether it is just a matter of our working with different assumptions, reflecting a different conceptualization on each of our parts. If it is this latter, then there is probably no way to argue it as it will just be a matter of how we each see (as in "understand") the referent of the word "consciousness". -- SWM

Hi Stuart and Joe,

I'm going to riff a little below. Let me know if my jazz gets a bit too improvisational to follow. I promise not to invent any impossible time signatures while riffing away!

You (Stuart) say you're not in the quale camp (as Joe presumably is). I understand perfectly why you'd want to say this. I also understand that Searle is not in the quale camp either if by "quales" is meant entities (and even Joe may not mean entities by qualia either!).

Searle remarks that discussions of quale are often simply confused. He points this out even in reference to the great Francis Crick in his review of _The Astonishing Hypothesis_.

The only daggummet quale is consciousness per se. Since it is field-like, one can shift focus from the feel of the shirt on one's back to the aftertaste of cunni... you get the point. Dennett is right to deny qualia if he denies consciousness. Searle busts him for an eliminativism which is part of a program for denying that the "hard problem" is a good scientific problem that scientists maybe ought to attempt to unravel.

I saw earlier that Stuart wanted to assume simply that if there were a solution to the hard problem, it would have to be dualistic. I can see why he would say this, following Dennett. I would submit that when Dennett is "explaining" consciousness in _Consciousness Explained_, he is merely doing a Wittgensteinian dissolution. For some philosophers that is as good as it gets. Not so for Searle.

Here's how Searle sees it. The study of how the brain causes/realizes consciousness (being the only quale in town, the "rest" being figments it makes sense to talk about even if not entities in the way of direct perception of real-world objects) need not (better not!) suppose consciousness to be epiphenomenal from the start. That would be a priori hubris and no one really wants any of that.

Searle concedes that we have to leave it empirically open whether consciousness is epiphenomenal but notes that it is kind of awkward, say, to write a book such that it gets written despite consciousness playing no role in its production.

So, to cut to the chase. Perhaps the study of how the brain causes/realizes consciousness is akin to the discovery of the germ theory of disease. We find correlations first, causes later. There simply must be a mechanics of how the brain does it since we know independently that the brain allows for falling asleep and waking from such. How? That's a matter for science and, good Wittgensteinian that he is, Searle "dissolves" the _philosophical_ mind-body problem only to (as Austin's phrase has it) "kick it upstairs to science."

Caveat. There is a good sense in which once we have correlations (say, the neurobiological correlates of consciousness, NCC's for short), there is forever going to be a gap between these correlations and the real mechanics. Walter at Analytic parsed it (but later claimed not to remember having done so) as a position that will always have a flier attached, whatever he meant by flier. I assumed he just meant that there will always be a gap between the NCC's and, for all we'll ever (ever ever?) know, the real mechanics.

In _Freedom and Neurobiology_, to connect with a recent thread in this group, Searle notes that he is an incompatibilist when it comes to freedom. He points out that if there are no gaps at the bottom level of explanation, and the bottom level causally explains the higher system feature (it being causally reducible) of brains being conscious, then freedom is an illusion.

One way that freedom is not an illusion is if there are gaps in the bottom level of "causation"/explanation. QM fits the bill for gaps at the bottom level, such that that type of explanation would be compatible with free will at the system level where consciousness is explained (and not merely explained away as in Dennett--sorry, behaviorists, but you are well done and cooked, and we see why you've overbaked your bread for so many years). Indeed, I pointed out above why the hard problem really may seem to some to involve a dualistic solution if there is one.

Not so, says Searle. Here are some options:

1. Brain causes consciousness in a mechanistic way having nothing to do with the gappiness of QM explanation/"causation." Consequence: No free will.

2. Brain causes consciousness in a QM-style explanation/"way." Free will is not contra-indicated.

Concluding remark:

1. really is no threat to human power even if it is true that there is no such "thing" as free will. The fact that one is part of a vast power play of forces is consistent with any common sense freedom ever thought to be worth having in the first place.

Philosophy is easy. The hard problem is called such for a reason.

Some say the reason is that a solution (dualism) is possible only if we deny other commitments (physicalism). (So Dennett and others say.)

Others, like Searle, say that the hard problem need not involve miracles for a solution.

And still others see that any possible solution to the hard problem will involve a gap between the correlations and causation.

I suppose a gap will always be a possibility since the mechanics are going to be inductively arrived at.

Say one gets a good group of lucid dreamers to perform protocols like moving eyeballs up and down upon becoming lucid. It is (as it was, cf. LaBerge) the case that there is simply way too high a correlation between the objective data and the lucid dreaming performances for dismissing the correlations as random.

Now, for a complication, say that we get really good inductive evidence (super-high correlation) for exactly when lucid dreamers become lucid. And then assume that in some cases the lucid dreamers don't remember (as they normally do remember) having a lucid dream when the evidence inductively shows the contrary.

Here we have possible cases where we might be justified in saying to someone: You might not remember it, but our data suggest you had a lucid dream regardless.

It may be possible to admit these sorts of cases even though we are most fond of pointing out that being conscious is something normally considered to be unfalsifiable by recalcitrant experiences given that, well, they are experiences too.

I hope the Coda wasn't too long!

Cheers,
Budd


3.3.

Re: Consciousness and Quantum Mechanics

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Sat Jan 2, 2010 5:57 pm (PST)



Bud:

Could you do me a favor from now on and delete the portion of the other person's message that you don't need? This message below (which I have only referenced) has a large segment below your signature that should be cut. We have a block-quoting rule in the system, which was apparently escaped by the way your mail client sent the mail. I stress the need to remove as much as possible so the message board isn't such a hassle to read. The policy is 25 lines per your thought, so it should be pretty easy to follow. The real point is to delete below your signature. After your last comment, get rid of the stuff below.

Regards and thanks. 

Dr. Sean Wilson, Esq.
Assistant Professor
Wright State University
Personal Website: http://seanwilson.org
SSRN papers: http://ssrn.com/author=596860
Discussion Group: http://seanwilson.org/wittgenstein.discussion.html

 
----- Original Message ----
From: gabuddabout <gabuddabout@yahoo.com>
To: wittrsamr@freelists.org
Sent: Sat, January 2, 2010 3:14:35 PM
Subject: [Wittrs] Re: Consciousness and Quantum Mechanics


3.4.

Re: Consciousness and Quantum Mechanics

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 7:20 pm (PST)



--- In Wittrs@yahoogroups.com, Joseph Polanik <jPolanik@...> wrote:
>
> SWM wrote:
>
> >Joseph Polanik wrote:
>
> >>I undertook to show that your mechanistic, Dennett-based theory of
> >>consciousness can't possibly be true unless von Neumann is wrong.
>
> >Yes, you did. On that score I would say you haven't yet made the case.
>
> how absurdly ironic this conversation has become!
>

Okay, Joe, I see this is as pointless with you as with some others. My mistake to have read it otherwise!

> for years now, you have been claiming that this or that person who
> disagreed with you had latent dualistic tendencies; indeed, in another
> recent post you accuse Bruce of 'implicit dualism'.
>

I have said that holding a certain conception of consciousness implied dualism. If you think, as you apparently do, that when we speak of consciousness we mean phenomena plus a perceiving subject that exists apart from everything else (including those phenomena) then that is either dualist or idealist. (It will depend on whether you go on to think that everything else is real, in some ontological sense of "real", or just in the mind, of course.)

Apparently a lot of folks freak out over being linked with anything dualist. Well first, it's only a word and second it could even be true, even if I happen to think there is no reason to suppose it is. Insofar as you are making the case for a "cogito"-like argument, which you have indicated you were before, then that is certainly in keeping with dualism at the least, and maybe even some form of idealism.

If people can't talk rationally and reasonably about this, I don't want to be bothered anymore. I stayed off this list for the past 24 hours to see how this went. Returning and reading this message of yours, I see no improvement so I will probably just end my involvement here. Thanks for helping me with the decision.

> nevertheless, when I presented the von Neumann Interpretation of QM
> (which is as overtly dualistic as one can get without actually
> plagiarizing from Descartes scrapbook), you resist the suggestion that
> the von Neumann Interpretation is incompatible with your mechanistic,
> Dennett-based theory of consciousness.
>

What the "f" are you talking about?

As for the theory I subscribe to, it isn't Dennett-based, it is Dennett-consistent.

> >I don't see the negative implications in it for Dennett's model thus
> >far, nor do I find your tweaking of von Neumann's thesis to alter his
> >category II in terms of what is said to be included within it implied
> >by his thesis. By itself, von Neumann's thesis seems to have no
> >implications for Dennett's proposal as far as I can see at this point
> >while your tweaked version strikes me as an effort to shoehorn an extra
> >thesis into von Neumann's.
>
> how exactly did I alter the contents of von Neumann's division II?
>
> are you saying that I've included in division II something that von
> Neumann excluded? if so, what do you say I added?
>

You said von Neumann included everything physical whereas you changed that by redefining the physical as the phenomenal. Physical and phenomenal are not equivalents, at least not without a whole lot of work to recast the one as the other, nor is it obviously true that such a recasting must be seen to succeed.

> are you saying that I've excluded from division II something that von
> Neumann included? if so, what do you say I subtracted?
>

See above.

> >It is still worth exploring, of course, but now it comes down to
> >understanding the Polanikian version which, as far as I can see, hinges
> >on a move von Neumann doesn't make and also on a presumption of
> >"metaphenomenal" phenomena.
>
> what is the move that you think I make that von Neumann doesn't make?
>

See above. (Why the "f" did you object to my noting that your 1,2,3 differs from von Neumann's I,II,III which you had initially cited if you now want to say there is no difference between them?)

> I deny presuming that there are metaphenomenal phenomena. what did I
> write that made you think otherwise?
>

You said #2 contains whatever is phenomenal and that #3 is metaphenomenal. Since the occurrence of a subject in the universe is a phenomenon in the universe (we encounter it in the universe) it is phenomenal by dint of its being a phenomenon. Therefore, as the content of #3 it is a metaphenomenal phenomenon.

I don't argue for the intelligibility of the claim. I only note that that is the proper conclusion that flows from your assertions and if they are unintelligible, that is a problem for you, not me.

This isn't philosophy anymore because you (and some others) are not thinking about the issues in any serious way or considering what others are saying. You are out to see who can prolong this through insult and nitpicking crap in an extremely juvenile manner and I decline to participate further. You can go back and play with Walter since I see he finds you amusing in his fashion.


3.5.

Re: Consciousness and Quantum Mechanics

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 8:39 pm (PST)



Budd, I am in the process of withdrawing from this list and I have certainly found you to be among the most unpleasant posters I have ever had occasion to deal with on this or any list so I am reluctant to re-engage you. But just as I pointed out to Mr. J. that I deserved the courtesy of hearing the specifics of his allegations, which he rather grandiosely declined to provide, I suppose you deserve some response, too. Of course I enter into this with you hesitantly as I am anxious to put an end to the nonsense that has recently erupted here (since your arrival, oddly enough). So I will entertain your comments below and give you something of a response, for the record, though far more detailed responses from me to you will be found in abundance on Analytic where some of your somewhat sharper fellows reside.

(Sean, feel free to kick me off this list at any time if you decide I am being harsh or offensive but as I am leaving anyway, what the hey?)


--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:
>
>
>
<snip>

> Hi Stuart,
>
> I'll repost that to which you haven't responded. Simply clarify where I might be misreading you or Searle.

Just about every point, actually. To save time, though, one especially egregious example is to be found in your embrace of Peter's mistaken claim that Searle and Dennett are really on the same side because Dennett's position is not contradicted by Searle's CRA. Note that Searle thinks otherwise as he has attacked Dennett's position as being contrary to his CRA and Dennett has similarly attacked Searle's CRA. So all the huffing and puffing about whether this is about hardware or software or whether "Strong AI" is not the same as Dennett's model is hogwash. Go read either of them on the subject.

> I'm quite confident that I have Searle perfectly understood.

And you were equally "quite confident" when you gave us the wrong definitions of "strong AI" and "weak AI" way back when, too. And when you claimed that Searle would never have been so foolish as to make a syllogistic argument. Your level of confidence is not a measure of whether you are right or not, nor is it a measure in anyone's case but I'm glad you feel "quite confident". It must be a very nice feeling.

> The thesis of strong AI as Searle has it is that a properly programmed computer may have consciousness (or semantics without consciousness..)

Well you did fess up when it was pointed out to you that you had it wrong back on Analytic (and not just by me) so I'm pleased to see you have learned since then and have retained the information.

> IN VIRTUE OF THE PROGRAM ALONE, never mind the hardware.

Yes, any platform capable of running the necessary programming will do on this view. But, of course, the corollary is that a platform that lacks the requisite capacity will not.

> Now once you mind the hardware you must either be thinking of more computation as in more robust hardware for more robust computation or are thinking of hardware in noncomputational terms as in brute physics (which Searle does not argue against given his biological
> naturalism which bottoms out in brute physics.

This was one of Peter's later arguments which you picked up on when you read it. His position was that, since the system and connectionist responses involved multiple processors running many programs together (affecting one another), the addition of this "more hardware" made this a different case and not what Searle's CRA was aimed at. You may wish to recall, as I pointed out back then, that Dennett's thesis has to do with a massively parallel platform, akin to the massive parallelism he speculates is to be found in brains, and that Searle attacks it as STILL BEING A COMPUTATIONAL PLATFORM and therefore as being incapable of achieving consciousness BECAUSE his CRA concludes that nothing computational can be conscious.

My beef with Searle is that I have maintained that the CRA doesn't show that because it is rife with a number of flaws. But the pertinent point here is that Searle uses it contra Dennett and Dennett argues that his model CAN achieve consciousness contra Searle's CRA conclusion.

Thus (now do try to read closely!) more computation (as in parallel processors working together in a single overarching system) IS considered by Searle to be contrary to his conclusions. Thus the originator of the argument in question DOES NOT SUPPORT YOUR VIEW THAT PETER IS RIGHT AND THAT MULTIPLE PROCESSING, BECAUSE IT INVOLVES MORE HARDWARE, does not contradict the conclusion of Searle's CRA.

> I am operating from what I feel to be good memory of your past gambits. Here is the post from here that you didn't get to yet, starting with a quote:
>

> Stuart writes:
>
> "These approaches [strong AI AS WELL AS weak AI as well as AI in general?--Budd]
> all have in common the supposition that the brain is a type of
> machine, albeit an organic one, and that it is such "machine" operations that
> are responsible for what we recognize as consciousness (including its many
> features)."
>
>
> Searle points out that strong AI is incoherent because it is not machine enough.

Searle points out, via the CRA, that computational processes cannot be conscious because they lack intentionality which, he presumes, must be achieved in some as yet unknown way by brains (since we know brains achieve it).

By the way, do you have a quote of Searle saying that "strong AI is incoherent because it is not machine enough"? That would be most interesting since "Strong AI" isn't a term applied to a machine but to a thesis. I will assume you are attempting to paraphrase him here though. Insofar as Searle is saying that brains must be able to do things that computers can't, I suppose one could elaborate it as you have. But the issue is to do with what computational processes running on computers can do, not with how many processors are involved in the platform.

> Peter D. Jones (at Analytic) expresses the point quite well when saying that
> biological systems have no softweare/hardware separability as strong AI systems
> do. Once you have software/hardware separability, the program itself is too
> abstract to count as an hypothesis as to how the brain (or any other real
> machine without S/H separability, hence possible AI) causes consciousness.
>

This gets at another confusion which goes to Searle's later argument, that the idea that computational programs running on computers could be conscious is unintelligible since programs have neither syntax nor semantics. (Note that his original argument hinged on the premise that "syntax does not constitute and is not sufficient for semantics" where semantics are what minds have while computers have only syntax.) As Searle moved away from that argument (probably realizing the weakness of it) he introduced this later notion (see The Mystery of Consciousness) in which he asserted that computer programs were not "natural kinds" in the world because the syntax they represented was in the minds of their programmers and users. Since they were not "natural kinds" they could have no causal ability. They were mere abstractions and abstractions cause nothing. Voila, computer programs cannot cause consciousness!

The error here is in supposing that the issue has to do with some abstract notion of programs. In fact it has to do with programs running on computers. That is, the machines do things, just like brains do, and if brains can produce consciousness by what they do then why shouldn't a machine do so as well? Computer programs running on computers are just so many coded instructions that cause the machine to run in certain ways. Likewise brains, as organic machines (acknowledged by Searle), make things happen including instances of consciousness.

The argument of AI researchers and people like Dennett is that one can account for what consciousness is by describing such a process-based system, whether or not contemporary computers have the capacity to run the kinds of operations at the necessary level that can replicate what a brain does.

> Peter also correctly points out what Searle pointed out in his Scientific
> American article: "Is the Brain's Mind a Computer Program?" (1990), namely,
> that parallel processing is of no help because anything a parallel processesor
> can do may be done on a serial computer.
>

As we have seen in several on-line texts we explored, there is an argument that parallel processing results in certain qualitative differences, not least the fact that simultaneous interactivity occurs. The point of the connectionist reply is that the Chinese Room is an inadequate model on which to base an argument against computationalism because it is underspecked. It is, in essence, a rote translational device, albeit of super capacity. But all it can do is translate one symbol to another. But no one in the field thinks that that is all brains do when they produce consciousness. As Dehaene points out, many things are going on simultaneously and interactively. So if Searle's CRA is an argument that superduper rote translational machines aren't conscious that's fine. But it says nothing about computational devices doing the many things brains do, including capturing, storing, associating and building with incoming data to produce various representational mapping systems that overlay one another and perform multiple functions.
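
(To see what "a rote translational device" amounts to, here is a toy sketch -- mine, not Searle's or anyone's on this list; the symbols and rulebook entries are invented for illustration. As specified in the CR, the room is functionally a stateless lookup from input symbols to output symbols, however large the rulebook:)

# Toy model of the Chinese Room as the CRA specifies it: a pure
# symbol-to-symbol lookup. Nothing here models the simultaneous,
# interacting processes (capturing, storing, associating, mapping)
# that the connectionist reply appeals to.

RULEBOOK = {
    "squiggle-1": "squoggle-A",  # invented entries, for illustration only
    "squiggle-2": "squoggle-B",
}

def chinese_room(symbol):
    return RULEBOOK.get(symbol, "squoggle-default")  # no state, no interaction

print(chinese_room("squiggle-1"))  # squoggle-A

The complaint that the CR is "underspecked" is precisely that it fixes only an input-output mapping of this shape and says nothing about internal organization.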

If all of these things can be accomplished by a computer, then there is no reason a computational machine (a massively parallel computer of sufficient capacity running the relevant programs in the way that allows the simultaneous interfacing) cannot be conscious. Searle's CRA, because it is underspecked, says nothing about such a system. And if it doesn't, if all it tells us is that a rote translational system isn't conscious, it is a pointless argument because no one disputes that. The issue is whether consciousness is realizable on a computational platform.

> Hence the waffling: Stuart conflates parallel processing with what the brain
> does.

That's an empirical claim, Budd. Even if you could prove it wasn't true, it would not be a "conflation". And there are plenty in the field who think it is true.

> Given that Searle is arguing also against parallel processing, then,
> given Stuart's conflation, it is understandable why Stuart would find Searle
> harboring some sort of dualism when in fact he doesn't.

This continues to demonstrate how horribly confused you are on this subject. Moreover, the argument I have given that Searle is implicitly dualist, even while he denies that, hinges on a different issue, i.e., what he needs to draw a general conclusion from his CRA applicable to ANY computational platform.

He has to be able to claim that anything the constituents in the CR can't do in the CR they can't do in any other R either. And to do so he must assume that there is something about consciousness that is fundamentally different from anything computational processes can do. As I've argued elsewhere, this leads him into a problem: If brains are strictly physical, as he admits, then why should brain processes be able to do what other machine processes (remember, Searle acknowledges that brains are organic machines!) can't do?

If, as Walter put it in defending the Searlean position, it's because there are just some properties of brains that are intentional, then one has to further ask what makes these processes that? What underlies them? What makes them occur? If they are reducible to constituent processes that are not themselves intentional, then they are no different in principle than non-intentional machine processes and if the machine processes can be made to do whatever the brain processes do, then they ought to be able to produce the same result, i.e., whatever "properties" we associate with being conscious. But if these new properties are not reducible, if they just somehow emerge in brains, full blown, then they are ontologically new in the universe. If THAT is so, then we have dualism, at least two ontological basics!

So to claim that brain processes can do what other processes cannot do is to suppose there is something that sets the brain processes apart at their most basic level, an assumption that amounts to dualism. And, in fact, Searle says in that paper we all linked to that he holds that consciousness has a first person ontology (is fundamentally subjective) even if it is causally linked to brains (describable in terms of third person observations). I have suggested that this is confused and masks the implicit dualism embedded in his CRA conclusion.

> The conflation of
> computational processes with physics is the root reason for Stuart's critique of
> Searle. It is also the main critique of a Searlean against parallel processing
> as an improvement on serial computing--there ain't no intrinsic difference and
> one ought not to conflate S/H and nonS/H systems if one is to be offering
> coherent comments about the whole issue of strong AI, weak, AI and AI, AI being
> possible for Searle and weak AI being useful for Searle, but strong AI being
> incoherent for Searle.
>

Aaagh, this is a mish-mash. Perhaps someone else here would like to try to unpack it because I see just a tangled reiteration of stuff you've already said and that I have already responded to!

> The only way strong AI seems coherent is if one conflates computation (or
> information processing) and physics--something Stuart does and Searle does not.
>

Searle mistakenly presumes that speaking of computational processes running on computers, all that is meant is computer programs (understood as listings of codes of instructions with meaning only to one who can read it). You, Budd, buy into that confusion. Note that no one in the AI field thinks that computers run without programs or that programs run without computers.

> Stuart wants it both ways. He wants to say with Searle that brains cause
> consciousness but doesn't want to follow Searle when Searle notes that strong AI
> is too abstract and amounts to a form of dualism.
>
> Cheers,
> Budd
>
>

I am unaware that Searle ever called "strong AI" dualism so if you can provide the citation, I'd be interested to read it in context. In the meantime, I think we've explored this enough. On Analytic it eventually got the list host and his allies pissed off (they didn't like my critique though it wasn't universally scorned as I recall), and led to acrimony. I plan to leave this list soon enough in hopes of avoiding such an outcome. But since J pronounced my understanding of Searle flawed without deigning to give any specifics, I felt I could not do the same with you Budd. Though I think you are at sea where Searle is concerned, I at least owed you a substantive response with my reasons.

SWM

=========================================

3.6.

SWM and Strong AI

Posted by: "J" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 10:55 pm (PST)



This will be short and sweet, satisfying your demands with minimal hassle for me.

SWM wrote
> I am unaware that Searle ever called "strong AI" dualism so if you can provide the citation, I'd be interested to read it in context.

In the seminal essay, the very source of the subject you've been debating for close to 6 years now, an essay the understanding of which you proclaim in the face of various critics on various message boards, viz. "Minds, Brains, and Programs", Searle wrote:

Third, this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter. In strong AI (and in functionalism, as well) what matters are programs, and programs are independent of their realization in machines; indeed, as far as AI is concerned, the same program could be realized by an electronic machine, a Cartesian mental substance, or a Hegelian world spirit. The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains. But if you think about it a minute you can see that I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn't got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn't carry out the project, for the program is completely independent of any realization. Unless you believe that the mind is separable from the brain both conceptually and empirically -- dualism in a strong form -- you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. If mental operations consist in computational operations on formal symbols, then it follows that they have no interesting connection with the brain; the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against "dualism"; what the authors seem to be unaware of is that their position presupposes a strong version of dualism.

"Could a machine think?" My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.

In defense of this dualism the hope is often expressed that the brain is a digital computer (early computers, by the way, were often called "electronic brains"). But that is no help. Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.[3]
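To see concretely what "the same program could be realized by" different machines comes to, consider a small sketch (a made-up illustration in Python, not Searle's example; the names and instruction set are invented). One formal program, two differently organized machines, identical formal behavior; nothing in the listing fixes which physics does the work:

    # One abstract "program": pure formal structure.
    PROGRAM = [("PUSH", 2), ("PUSH", 3), ("ADD",)]

    def stack_machine(program):
        # Realization 1: a stack machine.
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
        return stack.pop()

    def accumulator_machine(program):
        # Realization 2: a differently organized machine.
        # Same formal program, different internal workings.
        acc, pending = 0, []
        for op, *args in program:
            if op == "PUSH":
                pending.append(args[0])
            elif op == "ADD":
                acc = pending.pop() + pending.pop()
        return acc

    assert stack_machine(PROGRAM) == accumulator_machine(PROGRAM) == 5

Both realizations here are, of course, just Python functions on the same hardware, so the sketch illustrates only the formal point: sameness of program is defined over the symbol manipulations, not over whatever stuff carries them out -- which is precisely the independence Searle is calling dualist.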

Bon voyage,
JPDeMouy

=========================================

4.1.

Re: SWM on multiple causation and tangible effects.

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 2:11 pm (PST)



Hi Bruce,

You may have slipped below. You wrote:

"A brain -- in a certain state -- is a necessary but not sufficient
condition for a person to be conscious."

Or maybe you didn't slip at all. Maybe you would allow a particular (artificial or biological) brain, working (and not in one abstract, timeless state -- which may be the upshot you meant to deny?), to be both necessary and sufficient for a person to be conscious.

I would. I assume you would too.

Cheers,
budd

--- In WittrsAMR@yahoogroups.com, "BruceD" <wittrsamr@...> wrote:
>
>
>
> --- In Wittrs@yahoogroups.com, "SWM" <SWMirsky@> wrote:
>
> > A process of causal signaling gets transformed
> > This becomes the sensations that you have awareness of
>
> The "You" which makes sense is where in the causal chain? I can't find it. But the everyday "You" does work. Vut the everyday"You" isn't causally connected.
>
> > The point I have been making is that one CAN account
> > for all the features of consciousness,
>
> You can account for the biological basis, but not for the "features", because none of the features are physical. I see you as attributing psychological features to brain parts that are vital for psychology.
>
> For something (Y) to be a necessary condition for X doesn't make Y equivalent to X -- in a nutshell.
>
> > To give it up you have to shake the dualist picture
>
> Do you?
>
> > What does it mean to be aware of anything?
> > Well look at awareness in ourselves.
>
> and so on. If I didn't know you were the author, I'd assume it was a hard-core dualist.
>
> > then you need the "higher" level that deals with the first level
>
> The "higher-level" is just another _expression_ for the self. By calling it "higher", you are suggesting a continuity with the lower, biology. But the "higher", the person, operates by reason, not by causes.
>
> > You have to see how physical systems could do this sort of thing
>
> Right! I can't see it. What I mean by "physical" doesn't allow for any of the attributes attributed to a person.
>
> I prepared these notes: What is consciousness?
>
> I go with the dictionary definition: the state of being aware of one's own existence, sensations, thoughts, surroundings, etc.
>
> Note: This definition makes no reference to any substance, physical or mental. C can be understood apart from the ontological questions "what exists?" and "how many basic substances are there?".
>
> To continue, consciousness is consciousness of something BY SOME ONE. It is descriptive of a person, similar to "happiness": something that could be said of a person. For all descriptive terms we have criteria, but the criteria may or may not designate the cause of the state in question. Specifically, we are not clear exactly what
> brain state is necessary for C. In any event...
>
> While many conditions must hold for a person to be conscious, it is the person that is conscious, not the conditions, the brain for example. One in a vat, with the same electrical state as the person who is conscious, would not be conscious. A brain -- in a certain state -- is a necessary but not sufficient condition for a
> person to be conscious. Endorphins may be a necessary condition for a person to be happy, but endorphins alone happiness does not make.
>
> Basically, you are wanting a continuity where discontinuity prevails.
>
> bruce
>
>
>
> and computers are the best model for that, even if it turns out that they can't fully mimic everything brains do or can't do it in the right way (as suggested by Hawkins).
> >
>

=========================================

5.1.

Re: SWM on the extra-physical (for Bruce)

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Sat Jan 2, 2010 7:23 pm (PST)



Bruce, if you want to continue this I suggest we take it offline. I'm sure I shall never convince you, and it's equally probable you won't win me over to your viewpoint, but we can at least continue to try. I no longer wish to participate on these free-for-all lists where insults and foolishness hold sway, however. (Apologies to Sean. You did a bang-up job here, but even you cannot hold back the flood.) -- SWM

=========================================

6a.

Re: [C] message board record?

Posted by: "Sean Wilson" whoooo26505@xxxxxxxxx   whoooo26505

Sat Jan 2, 2010 10:38 pm (PST)





... yikes. 135 users at 9:39 Saturday night. That's a new message board record. Don't think we'll break that one for about two months.

Yours reporting.
==========================================



