[C] [Wittrs] Digest Number 152

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 25 Feb 2010 11:05:19 -0000

Title: WittrsAMR

Messages In This Digest (5 Messages)

Messages

1a.

Re: Rorty's Sins

Posted by: "kirby_urner" wittrsamr@xxxxxxxxxxxxx

Wed Feb 24, 2010 9:47 am (PST)





--- In WittrsAMR@yahoogroups.com, Sean Wilson <whoooo26505@...> wrote:
>
> (Kirby)
>
> ... I'm fine with Rorty when he is pushing Wittgensteinian
ideas. He does that a lot. I'm also keen on his approach to moral
philosophy (first person perspectives). Really, when Rorty is rather
nicely cut and shaved, one might consider him only to be an expositor
of Wittgenstein's ideas -- a disciple of sorts. It's just when he's
not rather nicely cut and shaved that he is, well, "odious."
>

That sounds OK, and a lot like me too.

One thing I appreciated about Rorty, as an undergrad, was his
willingness to take a big picture view of things. He'd zoom
way out. Take 'Zen and the Art of Motorcycle Maintenance':
he had stuff to say about it, took the time to sample what was
popular in the culture, had some idea of where an undergrad's
head might be at. Then he'd tell the story his own way, but
at least we knew he had a working knowledge of the ambient
culture. He knew how to sync. A lot of "philosophers" live
in some amberized (time-frozen) tundra, probably canned in their
young adulthoods, never thawed nor aired since.

Another big influence was Walter Kaufmann, who was giving a lot
of overview lectures from having lived a long life, crowning it
at Princeton, with the clout to grab Jadwin, the physics hall, for
his lectures.

For the most part, WK was overtly skeptical that universities could
ever incubate / breed real philosophers. Perhaps these could only be
temporarily imported, per the Wittgenstein model?

I'm not recalling WK's specific institutional proposals, only that
he discouraged cloistering as a role model for philosophers. He
encouraged more activism à la Wittgenstein's profile.

Certainly Dr. Fuller was all over the place, with those three wrist
watches (for time zones) and those eleven PhDs (more or less), patents,
awards. Coxeter was more cloistered.

At the other end of the spectrum (for Dr. Kaufmann) were people like
Heidegger and Kant, whom he'd sometimes ridicule when given an
opportunity -- to the distress of some in our audience (WK was
"an asshole" according to many a whisperer -- well whaddya know).

You could set your watch by Kant's coming by the window each day,
on his morning constitutional. If a ship went down with all hands,
the great ethicist might weep, because his box of chocolates was on
that ship. I guess WK considered Kant like a Neo-Liberal or
something, whereas Heidegger was more like a Nazi shill? "Judge a
philosophy by the philosopher" was pretty much his dictum.

Not that he over-indulged in name-calling or anything, plus he'd done
a really decent amount of homework before venturing such opinions.
Again, Princeton is dedicated to giving undergrads a lot of overview,
so from my perspective, here was another great scholar doing his job
well, whether or not we ended up agreeing with every conclusion.

> The thing that set me off with Rorty is when he started claiming
Wittgenstein as the father to certain kinds of ill-formed tangents
that emerge in Rortarian thought. One is his desire
to abolish words talking about inner/outer (as being a false form of
_expression_). Another is his related confusion that "truth" was
a mistaken language construct (or something like that). The
difference between Wittgenstein and Rorty on these issues is
that Wittgenstein would never be prescribing how people should talk;
he would merely want the grammar and conditions of assertability
understood. For Wittgenstein, the idea of "truth" (as in, verified as
being outside the mind) wasn't a dogma -- at worst, it was just a
knot in certain kinds of games. For Wittgenstein, the model of logic,
truth and proof was a confusion ONLY IN PHILOSOPHY. And this is
because it was inferior to therapy and peace once the proper method
of philosophy was understood. More to the point, when "truth" meant
something informational, it was usually up to some other field to
obtain it, which should have had the effect of quieting the
philosophers (hence the peace). But the point is that telling an
INFORMATION FIELD that "truth" is a contrived way of speaking is
really transforming this idea into a dogma.
>

I'm not sure that "peace" in the sense of "always quiet" is always
the noble goal though. When the world is going to hell in a
handbasket or whatever it's doing, I think the mental picture of a
lot of cloistered philosophers "at peace" is eerie, more a scene
from some gothic horror flick.

The glass bead game is supposed to be about restoring balance or
something. Perhaps the Ivory Tower is too quick to side with Sauron
in wanting to marginalize itself to (suck up to) some higher power?

In the glory days of philosophy, she was at the pinnacle of the
quadrivium / trivium, right next to theology, quite confident in
providing overview and perspective.

They both tumbled together eh? Or philosophy fell later, after
Hegel and Marx? Did the psychoanalytic crowd take over after that?
Did Nietzsche start what Wittgenstein completed: "the linguistic
turn" (coined by Rorty right?)? What does that mean? How shall we
tell the story going forward?

Seems now it's a humpty-dumpty mess, with over-coddled philosophers
clinging to these tired dusty toyz they call "logic" while the
rest of the world struggles with computer science, inheriting from
Leibniz, yet thrown to the wolves by these crypto-analytic types.

Seems a waste of talent.

If philosophers form a brain trust, then we need them to end wars,
not just sit in the bleachers clucking their tongues about the nature
of consciousness, spilling popcorn.

That being said, sounds like Rorty picked some lost cause to cheer
for: no distinction between inner and outer. A Zen Roshi might do
the same.

If they come to you for training, great. Some will want to sit at
your feet and be a "Rortarian" (fun pun, like a Rotary Club member).

Most will not and praise Allah for that (like, I have only the one
live-in student at the moment and that's keeping me plenty busy --
and it's not like I'm the only teacher in this picture, praise Allah
again).

> And if Rorty wants to be excessively post-modern here, that's his
business. I just don't like Wittgenstein ever being associated with a
certain kind of nonsense or dogma.  I see so many people who half
understand fragments of Wittgensteinian thought and then proceed to
make such mischief out of their passions -- finally deciding to put
Wittgenstein's name on them.   
>  

There's a: "this is what Wittgenstein thought" mode (misusing his
cloak of a authority most likely) and then there's a "this is what
I'm doing with the Wittgenstein stuff and here's why" mode, which
latter is taking responsibility.

Like what Dr. Fuller does by relating Euler to Gibbs the way he
does. He says clearly this is nothing either of them ever thought of
doing, given their own preoccupations and time lines. He's not
dodging the fact that this is his own philosophical contribution
in 1054.00.

It's good to circle one's own work for accountability purposes.
Nothing in Wittgenstein suggests we should end this practice.

There's no excuse for palming off original thinking as that of some
greater authority. More honorable is claiming originality even where
many others have had the same thoughts.

It sounds to me as if, by your lights, Rorty sometimes attributed
his own views to LW instead of taking responsibility for them.

I've got a triangle going with Bucky, Coxeter and Wittgenstein:
not something any could have anticipated nor agreed to probably
(they were all contemporaries for a while), and yet if it's doing
real work in the real world, ethical implications included, then hey,
let's debate the merits.

I'm happy to raise my hand as the host of this little party. Adding
myself to the picture creates six lines of inter-relationship, and I
find it fruitful to yak about them all.
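
Checking the arithmetic on those six lines (my own gloss, in standard
notation): four nodes taken pairwise give

$$\binom{4}{2} = \frac{4 \times 3}{2} = 6$$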

> But if you just ignore these "weeds" in, or pluck them from, the
Rortarian garden, I suppose one would find it a pleasant sort of
space.
>
> Regards 
>

A zen garden, yes.

Peace,

Kirby Urner

Affiliations:
isepp.org (board)
python.org (PSF member)
npym.org (afsc.org corp rep)

Domains:
4dsolutions.net
grunch.net

> Dr. Sean Wilson, Esq.
> Assistant Professor
> Wright State University
> Personal Website: http://seanwilson.org
> SSRN papers: http://ssrn.com/author=596860
> Discussion Group: http://seanwilson.org/wittgenstein.discussion.html

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

2.1.

Re: Dennett's paradigm shiftiness--Reply to Stuart

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Wed Feb 24, 2010 1:58 pm (PST)



> > Budd

> Stuart

New = Budd

Hope it's not too confusing!

--- In WittrsAMR@yahoogroups.com, "SWM" <wittrsamr@...> wrote:
>
> --- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@> wrote:
> >
> > Stuart,
> >
> > I'll comment on your claim about whether Searle is arguing against Dennett (and why I offered that on one interpretation he is not).
>
> > In the target article (BBS), Searle points out that the systems (or robot) reply changes the subject from strong AI to nonS/H systems (or a combination of S/H and nonS/H systems).
> >
>
>
>
> What Searle is doing is denying the relevance of the System Reply to his argument. Dennett responds in Consciousness Explained, among other places (and I have already transcribed that response onto this list in reply to a challenge by Joe), as to why it is relevant by arguing that the CR as a model is simply underspecked.

All parallel processing can be implemented on a serial computer. There simply is nothing more by way of computation that can be done in parallel that can't be done serially.

It is therefore a red herring to say that Searle's CR is "underspecked" in computational terms. It is underspecked, indeed, in that it is the upshot of a computational theory of mind. The Turing test is passed while the semantics ain't there--no matter how much parallel processing goes on WHEN SAID PARALLEL PROCESSING IS ALREADY KNOWN TO BE ALSO HAD BY SERIAL COMPUTATION.
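
To make that point concrete, here is a minimal sketch (mine, in Python;
nothing like it appears in Searle or Dennett) of the textbook
construction: a fixed set of "parallel" processes stepped round-robin on
one serial thread, the way a universal machine interleaves the machines
it simulates. The names worker and serial_scheduler are invented for
illustration.

from collections import deque

def worker(name, steps):
    # A toy "process": performs `steps` units of work, yielding control
    # back to the scheduler after each unit.
    for i in range(steps):
        yield "%s: step %d" % (name, i)

def serial_scheduler(processes):
    # Interleave the processes on a single serial thread, round-robin --
    # exactly one step of one process runs at a time.
    queue = deque(processes)
    while queue:
        proc = queue.popleft()
        try:
            print(next(proc))   # run one step of this process
            queue.append(proc)  # then requeue it behind the others
        except StopIteration:
            pass                # this process has finished; drop it

serial_scheduler([worker("A", 3), worker("B", 2), worker("C", 3)])

The interleaving preserves the computation's input-output behavior; what
it gives up is literal simultaneity and wall-clock speed, which is just
where Stuart pushes back below.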

This response might be given to everything else you write below. Where something different might be given, I'll try to give it below.

>The reason Searle doesn't see this, something I have pointed out before, is because Searle is committed to a conception of consciousness as an ontological basic (an irreducible) whereas Dennett is proposing that consciousness CAN be adequately conceived as being reducible.

He not only sees the possibility you are attempting to describe, he sees it as a nonstarter because all parallel processing can be done by a serial computer (a Universal Turing Machine, or UTM for short). This is a repeat. Your conclusion simply can't follow.

> If it can, if we can explain subjectivity via physical processes performing certain functions, then the System Reply doesn't miss Searle's point at all! And that is Dennett's case.

The systems reply rebuttal, of course, has Searle flippantly describing "bits of paper" as a stand-in for the formal processes. Dennett and Hofstadter (_The Mind's I_--is there one?!) parlay this into a claim that his CR is underspecked but the song remains the same--parallel processing can be.... (I think this is where Peter kept giving you the option of spelling out parallel processing without resort to simply more computation--and what is left is brute causality, which leads me to think that the systems reply is a waffling mess because it conflates brute causality with a type of processing which is supposed to be more causally robust, as in complex, but turns out to be that which can already be done on a UTM/CR).

>
> Of course the two are at loggerheads. No one is denying that. But the claim you and some others have made, that Dennett and Searle are really on the same side because both agree that some kind of synthetic consciousness is possible, except not via computers, is simply wrong. Dennett is specifically talking about a computer model being conscious and Searle is specifically denying THAT possibility.

He is denying the coherence of strong AI as defined by Schank and Abelson's 1977 work (and Winograd's 1973 and Weizenbaum's 1965--"and indeed any Turing machine simulation of human mental phenomena" (target article)). To the extent that the systems reply misses the point, it may be compatible in spirit with both Searle's biological naturalism and his contention that he is not arguing against AI in general (just strong AI as defined by Schank and others, which is spelled out in the target article).
>
>
>
>
> > The point about Dennett is that he can't have it both ways.
> >
> > The systems reply (as well as the robot reply) is motivated by strong AI or not.
> >
>
> This isn't about motivations but about the merits of the competing claims. The System Reply hinges on conceiving of consciousness in a certain way and Searle simply doesn't conceive of it in that way.

Look, this is where you are dead wrong. Searle is speaking about a specific thesis held by Schank and others, and then shows that such strong AI systems may pass a TT while not having the semantics that the TT was to be a criterion for. The best a criterion can do is spell out our original intuitions anyway. Both sides' intuitions are that nonconscious processes cause semantics and, say, consciousness. There is simply no way to go from Searle's seeing a flaw in functionalism/the computational theory of mind to a position that denies the very spirit of the systems reply. So the systems reply is motivated by strong AI or not. That remains true, along with the demerits found in the vacuity of the TT after strong AI is fleshed out as the thesis it actually is.

If one wants to waffle, then one is simply flirting with Searle's position under another (two) name(s). Searle's biological naturalism allows for AI, and both are simply general statements that physical systems may cause and realize consciousness, whether the system be a biological one or an artificial one. Denying strong AI is not denying AI. And denying strong AI is absolutely not a denial that a physical system (like a brain, or an artifactual system with at least the same causal capacities) is necessary for semantics/consciousness.

You are just locating a false dilemma.

>Therefore he either doesn't see, or refuses to see, the point of the System Reply. Recall that his argument against that reply is it misses his point.

His actual response is that the man can internalize the whole system and still not understand Chinese. In the next paragraph of his response to the systems reply he mentions that he is embarrassed even to give the above reply due to its implausibility. He mentions the systems reply involving the claim that while, according to the systems reply now, the man doesn't understand Chinese, the whole system nevertheless does. Here is where Searle mentions the extra stuff besides the man's rule following being a case of "bits of paper" added to what the man is doing. The point he is making is that no amount of computation (whether in serial or parallel, because all parallel processing can be done serially = UTM = CR) added to what the man understands is going to make one iota of difference.

I know, I know, it is the process of BOTH the program as well as its implementation (hardware) that is the REAL story and not just software in isolation, yada, yada. But that is to court a form of AI which is not strong AI or to court a waffling of brute physics with the computational level of description which was to be what strong AI was all about.

But you are also right to say (if you ever did) that Searle claims the systems reply begs the question simply by assuming the man understands Chinese somehow. Or wait, you said he said that it misses the point. I think this is true when he goes on to explain that the systems reply may have the absurd consequence that we can no longer distinguish systems that have a mental component from those which do not. But in that case it may not have missed the point of strong AI after all--the point amounts to the idea of hylozoism, since mind is defined computationally and everything under the sun can be given a computational description. For fun, cf. Rudy Rucker's new sci-fi book _Hylozoic_, where in a funny passage Jayjay gets confused while teleporting rocks and almost teleports his head from his body!

>But if he is simply unable to conceive of consciousness in the mechanistic way proposed by Dennett then he is missing Dennett's point.

The whole point of insisting that it is the brain that causes consciousness is quite mechanistic enough! The only shot you have here is to conflate physics with computation and insist that since Searle is denying the plausibility, er, coherence of a computational theory of mind, then he has to have some nonprocess based system in mind. But note that your argument has the absurd consequence that Searle's notion of the brain causing consciousness amounts to his inability to conceive of consciousness being caused by noncomputational mechanisms. This is where I see your argument as quite bad indeed, absurd even. Recall that your other bad argument amounts to the same thing. Searle doesn't know how brains do it. He argues against strong AI. Ergo he must be a dualist of sorts.

That is awful but explainable given your conflation of computation and physics. It occurs so frequently below that it is probably enough to end it right here. But not until I spank you just a bit more below--lighten up if you are thinking of taking offense!

>
> You may recall that I have long said here and elsewhere that in the end this is about competing conceptions of consciousness.

And I have said that you wanted it to be, but I've shown that both Dennett and Searle agree that consciousness is caused by physical processes. So maybe it IS about competing conceptions of consciousness for SOME. But you can't accuse Searle of dualism when he is simply arguing that strong AI is incoherent--unless you conflate strong AI with physics. But that would be to forget about the fact that strong AI is a species of functionalism, and functionalism is wedded to a level of computation that is SUPPOSED to be somewhere between the brute physical level and the intentional level, if you get the history right. This is part of my contribution to the topic, by the way.

>Either consciousness is inconceivable as anything but an ontological basic or it isn't.

And who really has taught the world how to distinguish an ontological basic from a nonbasic? I'll remind you that this isn't about what is conceivable only--the thought experiment took something conceived via the TT (Turing test) and showed that the criterion wasn't good enough. That it is conceivable that physical processes cause consciousness is a thesis shared by Searle and Dennett. This nonsense about ontological basicness doesn't arise in the case of Dennett OR Searle but may be parlayed into another discussion of other proposals for how minds are what they are. You keep wanting to lump Searle with those who would talk of ontological basicness. The very idea of ontological commitment is shown by Searle to have a merely trivial application as commitment via a complete (or set of) speech act(s). Cf. Searle's _Speech Acts_.

>If it is, then Searle is right. If it isn't, then Dennett's model is viable (and therefore Searle's blanket denial of that model is wrong).

I've found you saying that for quite a while. But both Dennett and Searle share the thesis that physical processes cause consciousness somehow. Searle may be wrong about strong AI's viability in your eyes, but you can't be unaware that Searle's reasons for thinking strong AI incoherent are that he thinks it too abstract and "not machine enough."

Now suppose you are aware of Searle's reasons for arguing against the coherence of strong AI. Then you can't lump Searle in with the "ontological basic" camp, wherever they are. Now suppose you don't know, then what gives? Can you be that myopic as to not see that Searle and Dennett are on the same page as far as physical processes causing consciousness?
>
>
> > If not, then Searle is not in disagreement--and so would not be in disagreement with Dennett if he is waffling on strong AI.
> >
>
> See above.

I've seen. Now you see?

Anyway, my God you have a unique set of pipes, Stuart!

Have a good one!

Cheers,
Budd

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

2.2.

Re: Dennett's paradigm shiftiness--Reply to Stuart

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Wed Feb 24, 2010 5:42 pm (PST)



--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:

<SWM>

> New = Budd
>
> Hope it's not too confusing!
>
<snip>

> All parallel processing can be implemented on a serial computer. There simply is nothing more by way of computation that can be done in parallel that can't be done serially.
>

This misses the point again. The issue is that, if consciousness is a certain kind of process-based system, then you need to have all the parts in place, even if they all consist of different computational processes doing different things and it takes a parallel platform to do this. That one can do each of the processes in a serial way, too, isn't the issue because one can't do it all in the way that's required, i.e., by running a sufficiently complex system with lots of things interacting simultaneously, in parallel, using a serial platform. (PJ has argued that a really, really, really, really, etc., fast system could do what a parallel system could do even if we have no such system or the possibility of building one and I am agnostic on that. It may, indeed, be possible to achieve synthetic consciousness on a serial processor running at super-duper speed. But so what? The issue is what does it take to do it in the real world and, for that, parallel processors are a way more realistic option.)

If the issue were that consciousness cannot be sufficiently accounted for by describing syntactical processes at work, then introducing complexity of this type wouldn't matter, of course. But as Dennett shows, we can account for the features of mind by this kind of complexity, at least in a descriptive way (if one is prepared to give up a preconceived notion of ontological basicness re: consciousness). Whether Dennett's model is adequate for accomplishing the synthesis of a conscious entity in the real world remains an empirical question. But the point is that there is nothing in principle preventing it, as long as we can fully describe consciousness this way. So everything hinges on whether Dennett's account of consciousness as a certain agglomeration of features is credible.

To dispute Dennett you have to say his account doesn't fully describe all the features that must be present. Searle attempts this with his CRA but his attempt hinges on a conception of consciousness which requires it be irreducible (i.e., already assumes Dennett's model is mistaken at the outset) -- and yet even Searle doesn't stand by this with regard to brains, thereby putting him in self-contradiction.

> It is therefore a red herring to say that Searle's CR is "underspecked" in computational terms. It is underspecked, indeed, in that it is the upshot of a computational theory of mind.

That misses the point. If consciousness is an outcome of a certain kind of complex system, then not only is it feasible, at least theoretically, on the Dennettian model but Searle's CR manifestly fails because of the very thing Peter Brawley on the other list pointed out: you can't build a bicycle and expect it to fly. Searle's CR is not doing the things a brain does, i.e., it is not running all the complex processes that go into understanding, intending, etc. It's a rote responding device without all the processes doing all the things that are part of what it means to understand, intend, etc. It's a bicycle relative to the brain as jet plane.

> The Turing test is passed while the semantics ain't there--no matter how much parallel processing goes on WHEN SAID PARALLEL PROCESSING IS ALREADY KNOWN TO BE ALSO HAD BY SERIAL COMPUTATION.
>

This isn't about whether the Turing Test is passed. Searle's argument presumes it is passed by fiat (even though it is questionable that such a system could do any of the things Searle stipulates it does). This is about whether a system running the kind of processes the CR runs could, if it appears to be conscious, actually be taken to be. Aside from the fact that Dennett's point is that it could not actually succeed in passing the test, let's grant that it does anyway, for argument's sake. Let's grant that it really does look from the outside as if a real mind is there. Searle says look inside and what you see is only rote processing, no understanding at all. Thus no one would agree that the CR is conscious.

As I have already pointed out, I grant he is right on that. The CR qua system is not conscious, the standard System Reply notwithstanding. But that is because the bicycle of the CR is stipulated by both sides in this argument to be flying. But it has no wings and no jet engines and no aerodynamics. All the things that would enable it to fly are missing. We simply agree that it is flying!

Well, you can do that in an argument but so what? It can have no relevance to the real world! Even a stipulated flying bike still isn't flying up there in the real clouds. And that's because it is missing key constituent parts! Well, so is Searle's CR.

Understanding involves a lot more than rote match up of symbols in a mechanical way. And the CR lacks the capacity to do the missing stuff.
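
To picture that rote match-up at its barest (a deliberately trivial
sketch of mine in Python, not anything from Searle's paper; the rulebook
entries are invented placeholders), the whole "device" can be a lookup
table over uninterpreted tokens:

RULEBOOK = {
    "你好": "你好！",      # placeholder rule: one squiggle paired with another
    "你好吗": "我很好。",  # another placeholder rule
}

def chinese_room(symbols):
    # Pure shape-matching of uninterpreted tokens: nothing here models
    # meaning, belief, or understanding -- only syntactic lookup.
    return RULEBOOK.get(symbols, "……")

print(chinese_room("你好"))   # emits the paired symbols, "understanding" nothing

Whatever a system that really understood would need, it is more than
this; the dispute is over whether adding enough interacting processes of
the right kinds closes that gap.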

So if the CR is conscious it is so only by stipulation in which case it has no real world implications.

> This response might be given to everything else you write below. Where something different might be given, I'll try to give it below.
>
>

If all you have is what you have already said, then I've given the answer already.

>
>
>
>
> >The reason Searle doesn't see this, something I have pointed out before, is because Searle is committed to a conception of consciousness as an ontological basic (an irreducible) whereas Dennett is proposing that consciousness CAN be adequately conceived as being reducible.
>
>
> He not only sees the possibility you are attempting to describe, he sees it as a nonstarter because all parallel processing can be done by a serial computer (a Universal Turing Machine UTM for short). This is a repeat. Your conclusion simply can't follow.
>
>

See above. (Note: this is not about the quality of the processes but about the type of system being run where "system" equals multiple processes doing multiple things running in parallel time.)

>
> > If it can, if we can explain subjectivity via physical processes performing certain functions, then the System Reply doesn't miss Searle's point at all! And that is Dennett's case.
>
>
> The systems reply rebuttal, of course, has Searle flippantly describing "bits of paper" as a stand-in for the formal processes. Dennett and Hofstadter (_The Mind's I_--is there one?!) parlay this into a claim that his CR is underspecked but the song remains the same--parallel processing can be.... (I think this is where Peter kept giving you the option of spelling out parallel processing without resort to simply more computation--and what is left is brute causality, which leads me to think that the systems reply is a waffling mess because it conflates brute causality with a type of processing which is supposed to be more causally robust, as in complex, but turns out to be that which can already be done on a UTM/CR).
>

This is just a repetition of your mistake of presuming this is about the quality of the processes rather than the nature of the system.

>
>
>
> >
> > Of course the two are at loggerheads. No one is denying that. But the claim you and some others have made, that Dennett and Searle are really on the same side because both agree that some kind of synthetic consciousness is possible, except not via computers, is simply wrong. Dennett is specifically talking about a computer model being conscious and Searle is specifically denying THAT possibility.
>
>
>
> He is denying the coherence of strong AI as defined by Schank and Abelson's 1977 work (and Winograd's 1973 and Weizenbaum's 1965--"and indeed any Turing machine simulation of human mental phenomena" (target article)). To the extent that the systems reply misses the point, it may be compatible in spirit with both Searle's biological naturalism and his contention that he is not arguing against AI in general (just strong AI as defined by Schank and others, which is spelled out in the target article).
> >
> >

Searle's response misses the point, not the other way around. You miss the point as well when you fail to understand that Dennett's thesis IS the "strong AI" which Searle opposes (and which you previously called, in a weaker moment, "Dennett's strong AI").

> >
> >
> > > The point about Dennett is that he can't have it both ways.
> > >
> > > The systems reply (as well as the robot reply) is motivated by strong AI or not.
> > >
> >
> > This isn't about motivations but about the merits of the competing claims. The System Reply hinges on conceiving of consciousness in a certain way and Searle simply doesn't conceive of it in that way.
>
>
> Look, this is where you are dead wrong. Searle is speaking about a specific thesis held by Schank and others and then shows that such strong AI systems may pass a TT while not having the semantics that
> the TT was to be a criterion for.

Searle is also attacking people like Dennett as exponents of what he calls "strong AI". You have said it yourself in a weaker moment.

> The best a criterion can do is spell out our original intuitions anyway.

Who says?

> Both sides' intuitions are that nonconscious processes cause semantics and, say, consciousness.

But Searle's view falls into self-contradiction when he asserts that brains do it but computers can't because computational processes aren't instances of consciousness ("nothing in the Chinese Room understands Chinese and the Chinese Room doesn't either" -- Searle). While there may well be reasons to say computational processes can't do it (Edelman and Hawkins both attempt to make the case for that), Searle has no reasons aside from the nature of the computational processes themselves (they are merely "syntax", "formal", lacking in causality, etc.). But his idea of computational processes confuses the algorithmic aspect of programs with the processes they become when implemented on the right physical platform.

> There is simply no way to go from Searle's seeing a flaw in functionalism/computational theory of mind to a position that denies the very spirit of the systems reply.

What???

> So the systems reply is motivated by strong AI or not.

This isn't about "motivations" it's about substance. The systems reply hinges on a particular way of explaining consciousness while Searle's rejection hinges on another. That difference boils down to whether consciousness is reducible or not to constituents that aren't, themselves, conscious. If they are (and Searle's assertion that brains cause consciousness suggests he thinks they are), then there is no reason, in principle, to suppose computers cannot do the same kinds of things brains do. But if they aren't, then you have to either say brains can't do it (Searle won't say that, obviously), or else brains do it by conjuring something entirely new in the universe into existence. But that is dualism and Searle denies being a dualist. So he is in self-contradiction.

> That remains true, along with the demerits found in the vacuity of the TT after strong AI is fleshed out as the thesis it actually is.

This is just rhetoric, not an argument.

> If one wants to waffle, then one is simply flirting with Searle's position under another (two) name(s).

This is just a reiteration of the charge you have previously made which I refuted by showing that Searle IS at odds with Dennett's thesis and that both he and even you think Dennett is arguing for AI. Additionally, I've pointed out the mistake you make when you confuse the quality of the processes in question with the kind of system in question. You can't build a bicycle and expect it to fly, etc., etc.

> Searle's biological naturalism allows for AI and both are simply general statements that physical systems may cause and realize consciousness, whether the system be a biological one or an
> artificial one.

No one is denying Searle makes such claims.

> Denying strong AI is not denying AI.

Strong AI = the thesis that whatever it is we call "consciousness" can be synthesized on a computational platform.

Weak AI = the thesis that whatever it is we call "consciousness" can be simulated/modeled on a computational platform.

Note that Dennett is talking about the first, not the second.

> And denying strong AI is absolutely not a denial that a physical system (like a brain, or an artifactual system with at least the same causal capacities) is necessary for semantics/consciousness.
>

I have already spelled out the contradictions inherent in Searle's CRA vis-à-vis brains and what they do. But note that the description you give immediately above is NOT what Searle means by "weak AI" though you once made the mistake of supposing it is!

Moreover, I have presented enough evidence here for you to see that Dennett is arguing for "strong AI" and that Searle, in opposing Dennett, thinks so, too. Enough already, don't you think?


> You are just locating a false dilemma.

>
> >Therefore he either doesn't see, or refuses to see, the point of the System Reply. Recall that his argument against that reply is it misses his point.
>
>

> His actual response is that the man can internalize the whole system and still not understand Chinese.

And that is because HIS system, the CR, is underspecked.

> In the next paragraph of his response to the systems reply he mentions that he is embarrassed even to give the above reply due to its implausibility.

Who cares? What has THAT to do with the actual merits or lack thereof of his response?

> He mentions the systems reply involving the claim that while, according to the systems reply now, the man doesn't understand Chinese, the whole system nevertheless does.

This was the mistake of the early System Reply responders. They left out the extra step of noting that the system in question must also be adequately specked and the CR simply wasn't.

> Here is where Searle mentions the extra stuff besides the man's rule following being a case of "bits of paper" added to what the man is doing. The point he is making is that no amount of computation (whether in serial or parallel, because all parallel processing can be done serially = UTM = CR) added to what the man understands is going to make one iota of difference.
>

And THIS hinges on his mistake in focusing on the quality of the processes rather than the nature of the system the processes constitute.

> I know, I know, it is the process of BOTH the program as well as its implementation (hardware) that is the REAL story and not just software in isolation, yada, yada. But that is to court a form of AI which is not strong AI or to court a waffling of brute physics with the computational level of description which was to be what strong AI was all about.
>

No, you are making the same mistake you used to make, to suppose that by "weak AI", which Searle is on record as accepting, he means some form of as yet unspecified configuration of machine parts that could replicate what brains do without relying on computation primarily. THAT is NOT what he meant by "weak AI" so this is not a matter of Dennett or anyone confusing the two AI's but of Searle's mistakenly supposing that computational processes are merely abstract without causal efficacy in the world on a par with brain processes.

> But you are also right to say (if you ever did) that Searle claims the system reply begs the question simply by assuming the man understands Chinese somehow. Or wait, you said he said that it misses the point. I think this is true when he goes on to explain that the systems reply may have the absurd consequence that we can no longer distinguish systems that have a mental component from those which do not. But in that case it may not have missed the point of
> strong AI after all

Make up your mind!

> --the point amounts to the idea of hylozoism, since mind is defined computationally and everything under the sun can be given a computational description.

That is a false trail indeed! This isn't about expanding the idea of computationalism but about whether computers doing what they do can be conscious.

<snip>

>
> >But if he is simply unable to conceive of consciousness in the mechanistic way proposed by Dennett then he is missing Dennett's point.
>
>
> The whole point of insisting that it is the brain that causes
> consciousness is quite mechanistic enough!

But by doing so, Searle falls into contradiction as already noted, i.e., he says brain processes can do what computational processes running on computers can't do because computational processes running on computers aren't intrinsically conscious! So is he trying to say brain processes are? If so, from whence does that consciousness come? Does it just blink into existence in certain brains?

> The only shot you have here is to conflate physics with computation and insist that since Searle is denying the plausibility, er, coherence of a computational theory of mind, then he has to have some nonprocess based system in mind.

????

> But note that your argument has the absurd consequence

It's your argument or, better, your strawman imputed to me!

> that Searle's notion of the brain causing consciousness amounts to his inability to conceive of consciousness being caused by noncomputational mechanisms.

No, that is manifested in his argument for the consequences of the CR (i.e., the CRA).

> This is where I see your argument as quite bad indeed, absurd even. Recall that your other bad argument amounts to the same thing.

Just asserting badness is nonsense. It's just editorializing.

> Searle doesn't know how brains do it. He argues against strong AI. Ergo he must be a dualist of sorts.
>

That's not my argument as you should know by now. If you go back and read above in this very post you will see that.

> That is awful but explainable given your conflation of computation and physics. It occurs so frequently below that it is probably enough to end it right here. But not until I spank you just a bit more below--lighten up if you are thinking of taking offense!
>

I have. I find such silly editorial comments off-putting and a waste of both our time. Talk substance and leave the personal remarks aside and we'll both be better off.

>
> >
> > You may recall that I have long said here and elsewhere that in the end this is about competing conceptions of consciousness.
>
>
> And I have said that you wanted it to be but I've shown that both Dennett and Searle agree that consciousness is caused by physical processes.

They do. But Dennett offers an explanation for how while Searle simply asserts it as his belief, while falling into contradiction between what he says about the CR and what he says about brains. Self-contradiction is a problem for a philosopher like Searle who is purporting to provide a logical picture of what can't work.

> So maybe it IS about competing conceptions of consciousness for SOME. But you can't accuse Searle of dualism when he is simply arguing that strong AI is incoherent--unless you conflate strong AI with physics.

Searle's dualism is manifested by his assumption in the CRA. Without that assumption of ontological basicness for consciousness, one cannot draw the conclusion from the CRA Searle says we should draw.

> But that would be to forget about the fact that strong AI is a species of functionalism, and functionalism is wedded to a level of computation that is SUPPOSED to be somewhere between the brute physical level and the intentional level, if you get the history right. This is part of my contribution to the topic, by the way.
>
>

Computationalism is the thesis that minds are just certain process-based systems operating in a certain way at a certain level of complexity and that these systems are the kind computational processes can achieve.

>
>
> >Either consciousness is inconceivable as anything but an ontological basic or it isn't.
>
> And who really has taught the world how to distinguish an ontological basic from a nonbasic?

The issue isn't this as an explicit thesis but rather whether it is implicit in some theses.

> I'll remind you that this isn't about what is conceivable only

You are mistaken. It most certainly is.

> --the thought experiment took something conceived via the TT (Turing test) and showed that the criterion wasn't good enough.

This isn't about whether the Turing Test is a reliable test for intelligence but about whether a system like the CR that can pass it would be considered as having the understanding we associate with human type intelligence. Recall that Searle simply stipulates that the Turing Test is passed by his CR.

> That it is conceivable that physical processes cause consciousness is a thesis shared by Searle and Dennett. This nonsense about ontological basicness doesn't arise in the case of Dennett OR Searle but may be parlayed into another discussion of other proposals for
> how minds are what they are.

It's the fundamental conceptual difference between their competing views about the possibilities of computationally based consciousness.

> You keep wanting to lump Searle with those who would talk of ontological basicness.

No one that I know of uses that terminology but me, and I use it to get away from the archaic connotations of talk about substances. It's a more generic formulation, that's all.

> The very idea of ontological commitment is shown by Searle to have a merely trivial application as commitment via a complete (or set of) speech act(s). Cf. Searle's _Speech Acts_.
>

Elaborate your point and how it is relevant here then.

>
>
> >If it is, then Searle is right. If it isn't, then Dennett's model is viable (and therefore Searle's blanket denial of that model is wrong).
>
>
> I've found you saying that for quite a while.

Well congratulations on your memory then.

> But both Dennett and Searle share the thesis that physical processes cause consciousness somehow.

See my response to this same point which you have already made above!

> Searle may be wrong about strong AI's viability in your eyes, but you can't be unaware that Searle's reasons for thinking strong AI incoherent are that he thinks it too abstract and "not machine enough."
>

I know his rhetoric. So what? Rhetoric isn't argument.

> Now suppose you are aware of Searle's reasons for arguing against the coherence of strong AI. Then you can't lump Searle in with the "ontological basic" camp, wherever they are.

It's the dualist camp and I already have for the reasons already given, numerous times.

> Now suppose you don't know, then what gives? Can you be that myopic as to not see that Searle and Dennett are on the same page as far as physical processes causing consciousness?
>

This is the third or fourth time you've made this irrelevant point in this post!

>
<snip>

>
>
> Anyway, my God you have a unique set of pipes, Stuart!
>
> Have a good one!
>
> Cheers,
> Budd
>
> =========================================

You too, Budd. I can see we will never really understand one another. This is roughly the same argument we had back on the Wisdom Forum in 2004. Nothing, or very little, seems to have changed (though I do think my argument against Searle's viewpoint and for Dennett's has become better honed with repetition and even with dealing with some ongoing challenges). I wonder, though, if discussions like this ever lead to much?

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

3.

Philosophical method

Posted by: "void" rgoteti@xxxxxxxxx   rgoteti

Wed Feb 24, 2010 8:21 pm (PST)



My method throughout is to point out mistakes in language. I am going to use the word "philosophy" for the activity of pointing out such mistakes. Why do I wish to call our present activity philosophy, when we also call Plato's activity philosophy? Perhaps because of a certain analogy between them, or perhaps because of the continuous development of the subject. Or the new activity may take the place of the old because it removes mental discomforts the old was supposed to.

The words "thinkable" and "imaginable" have been used in comparable ways, what is imaginable being a special case of what is thinkable, e.g., a proposition and a picture. Now we can replace a visual image by a painted picture, and the picture can be described in words. Pictures and words are intertranslatable, for example, as A(5,7), B(2,3). A proposition is like, or something like, a picture. Let us limit ourselves to propositions describing the distribution of objects in a room. The distribution could be pictured in a painting. It would be sensible to say that a certain system of propositions corresponds to those painted and that other propositions do not correspond to pictures, for example,
that someone whistles. Suppose we call the imaginable what can be painted, and the thinkable only what is imaginable. This would limit the word "thinkable" to the paintable. Now of course one can extend the way of picturing, for example, to someone whistling:

This is a new way of picturing, for a "rising" note is different from a vertical rise in space. With this new way we can imagine more, i.e., think more. People who make metaphysical assertions such as "Only the present is real" pretend to make a picture, as opposed to some other picture. I deny that they have done this. But how can I prove it? I cannot say "This is not a picture of anything, it is unthinkable" unless I assume that they and I have the same limitations on picturing. If I indicate a picture which the words suggest and they agree, then I can tell them they are misled, that the imagery in which they move does not lead them to such expressions. It cannot be denied that they have made a picture, but we can say they have been misled. We can say "It makes no sense in this system, and I believe this is the system you are using." If they reply by introducing a new system, then I have to acquiesce.

4a.

Re: Strawson on Experience and Experiencers

Posted by: "Rajasekhar Goteti" rgoteti@xxxxxxxxx   rgoteti

Wed Feb 24, 2010 8:49 pm (PST)




The term Theta Role is often used interchangeably with the term thematic relations (particularly in mainstream generative grammar; for an exception see Carnie 2006). The reason for this is simple: theta roles typically reference thematic relations. In particular, theta roles are often referred to by the most prominent thematic relation in them. For example, a common theta role is the primary or external argument. Typically, although not always, this theta role maps to a noun phrase which bears an agent thematic relation. As such, the theta role is called the "agent" theta role. This often leads to confusion between the two notions. The two concepts, however, can be distinguished in a number of ways.
"Experiencer" is otherwise called a theta role. (From "Theta role," Wikipedia.)
sekhar
