[C] [Wittrs] Digest Number 151

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 24 Feb 2010 11:06:01 -0000

Title: WittrsAMR

Messages In This Digest (5 Messages)

Messages

1.

'Grammatical Remarks'

Posted by: "void" rgoteti@xxxxxxxxx   rgoteti

Tue Feb 23, 2010 5:45 am (PST)



Wittgenstein called these reminders 'grammatical remarks'. And whether they take a form that we would naturally call a 'rule' or not, these remarks do the work of a rule, and so they are rules -- regardless of their form. In logic, tools (i.e. signs) are defined by the jobs they are used to do.

"Look upon language as a collection of tools" is Wittgenstein's method. A method is neither true nor false -- even if it doesn't work. So Wittgenstein's sign 'Language is a collection of tools' is not a statement of fact, although that is its form.

What Wittgenstein actually said is that language is like a collection of tools. (That was his simile.) The sign 'Language is like a collection of tools' is clearly not a statement of fact. "Because anything can be compared to anything else in some way or another. That belongs to the grammar of the word 'comparison'."

http://www.roangelo.net/logwitt/logwitt1.html

2.1.

Re: Dennett's paradigm shift.

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Tue Feb 23, 2010 4:01 pm (PST)



Stuart,

I'll comment on your claim about whether Searle is arguing against Dennett (and why I offered that on one interpretation he is not).

In the target article (BBS), Searle points out that the systems (or robot) reply changes the subject from strong AI to nonS/H systems (or a combination of S/H and nonS/H systems).

The point about Dennett is that he can't have it both ways.

The systems reply (as well as the robot reply) is either motivated by strong AI or it is not.

If not, then Searle is not in disagreement--and so would not be in disagreement with Dennett if he is waffling on strong AI.

If so, then Searle has caught those offering the systems or robot reply either changing the subject (no disagreement if so) or being incoherent.

If someone manages to say that the program is purely formal and so the semantics are somewhere else (or a combination of program and nonprogram), then one has effectively removed the original motivation for strong AI as discussed quite clearly in the target article.

I still also disagree with your proposal that Searle is wrongheaded in his later critique of Strong AI being incoherent. His reason is crystal clear--no one knows what it would mean to discover if something were intrinsically computational. Computation names an abstract sort of thing.

If one bypasses this point by insisting that it is all about the combination of computation along with the physical processes used to carry the formal program, then one also has bypassed the original strong AI claim. And it still is problematic to understand just what formal processes can add to brute ones.

So Searle manages to distinguish his position as biological naturalism and insists that one of the motivating factors of strong AI (Dennett's among others) is still the idea that we can learn things about mind by studying the laws of computation without needing any information whatsoever about real brains.

But I do agree that brain science is tough. And I would disagree with your idea that Searle has to be a dualist because brain science is both tough and he is arguing against computational theories of mind.

For your argument to go through (Searle's dualism that he doesn't know is implied by his CRA and biological naturalism), you would have to waffle on strong AI. I believe you do along with all the systems and robot repliers. But if you waffle, you're really accepting something with which Searle is in agreement.

Cheers,
Budd

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

2.2.

Re: Dennett's paradigm shift.

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Tue Feb 23, 2010 6:22 pm (PST)



--- In Wittrs@yahoogroups.com, "gabuddabout" <wittrsamr@...> wrote:
>
> Stuart,
>
> I'll comment on your claim about whether Searle is arguing against Dennett (and why I offered that on one interpretation he is not).

> In the target article (BBS), Searle points out that the systems (or robot) reply changes the subject from strong AI to nonS/H systems (or a combination of S/H and nonS/H systems).
>

What Searle is doing is denying the relevance of the System Reply to his argument. Dennett responds in Consciousness Explained, among other places (and I have already transcribed that response onto this list in reply to a challenge by Joe), as to why it is relevant by arguing that the CR as a model is simply underspecked. The reason Searle doesn't see this, something I have pointed out before, is because Searle is committed to a conception of consciousness as an ontological basic (an irreducible) whereas Dennett is proposing that consciousness CAN be adequately conceived as being reducible. If it can, if we can explain subjectivity via physical processes performing certain functions, then the System Reply doesn't miss Searle's point at all! And that is Dennett's case.

Of course the two are at loggerheads. No one is denying that. But the claim you and some others have made, that Dennett and Searle are really on the same side because both agree that some kind of synthetic consciousness is possible, except not via computers, is simply wrong. Dennett is specifically talking about a computer model being conscious and Searle is specifically denying THAT possibility.

> The point about Dennett is that he can't have it both ways.
>
> The systems reply (as well as the robot reply) is motivated by strong AI or not.
>

This isn't about motivations but about the merits of the competing claims. The System Reply hinges on conceiving of consciousness in a certain way and Searle simply doesn't conceive of it in that way. Therefore he either doesn't see, or refuses to see, the point of the System Reply. Recall that his argument against that reply is it misses his point. But if he is simply unable to conceive of consciousness in the mechanistic way proposed by Dennett then he is missing Dennett's point.

You may recall that I have long said here and elsewhere that in the end this is about competing conceptions of consciousness. Either consciousness is inconceivable as anything but an ontological basic or it isn't. If it is, then Searle is right. If it isn't, then Dennett's model is viable (and therefore Searle's blanket denial of that model is wrong).

> If not, then Searle is not in disagreement--and so would not be in disagreement with Dennett if he is waffling on strong AI.
>

See above.

> If so, then Searle has caught those offering the systems or robot reply either changing the subject (no disagreement if so) or being incoherent.
>

Just because Searle asserts they are changing the subject doesn't mean they are, any more than just because I assert something of you (or you assert it of me) means I am (you are) right.

> If someone manages to say that the program is purely formal and so the semantics are somewhere else (or a combination of program and nonprogram), then one has effectively removed the original motivation for strong AI as discussed quite clearly in the target article.
>

You yourself called Dennett's thesis "Dennett's strong AI" and Searle himself repeatedly argues against Dennett's position using his argument against so-called "strong AI". So these two facts are prima facie evidence, at least, that this is about Searle's concept of computationalism (what Searle has named "strong AI"). Therefore Dennett's argument contravenes Searle and vice versa.

Now if you want to take the position that this isn't about computer programs running on computers (software on the necessary physical platform that runs it), then you have a problem because Searle is very clear that he IS talking about computers, even if he often speaks of programs as abstract. If he genuinely holds a view like the one you are imputing to him, that this has nothing to do with the platform (the hardware), then you must be saying that he is only arguing against the possibility of programs being conscious. But what then is a program, once you extract from it the operations implemented by the machine in which it is installed?

NO ONE IN THE AI WORLD IS ARGUING OR EVER ARGUED THAT THE PROGRAMS QUA ALGORITHMIC INSTRUCTIONS ENCODED ON SOME TAPE OR ON A PIECE OF PAPER OR IN A PROGRAMMER'S MIND CAN BE CONSCIOUS. There must always be implementation and implementation ALWAYS implies a platform, a machine. So while computationalism implies multiple realizability (that different machines can realize the same kind of conscious system if they are running the same processes), it does NOT imply that no platform is needed or that a platform having sufficient capacity is not required to do the job.

Dennett argues that the platform must be extremely powerful and have parallel processing capabilities to do the job. Searle argues that Dennett's system still can't do it because, in the end, it's just running syntax, mechanical operations according to certain prescribed rules. But Dennett counters that one can account for all the features we associate with consciousness by a description of sufficiently complex processes of this type. ("Complexity," Dennett argues, "matters".)

But remember there is a fundamental asymmetry here in their arguments. While Searle is arguing for the impossibility of a Dennettian type of model, Dennett is arguing only for its possibility. Impossibility implies an end to the debate but possibility does not as it remains to be refined, implemented and tested on machines capable of doing what Dennett proposes needs to be done.

> I still also disagree with your proposal that Searle is wrongheaded in his later critique of Strong AI being incoherent. His reason is crystal clear--no one knows what it would mean to discover if something were intrinsically computational. Computation names an abstract sort of thing.
>

Computer processes are no more abstract than brain processes. Both classes of process are physical events occurring on a physical platform. If brain processes can produce subjectivity there is no reason, at least in principle (based on their being processes!), why other processes cannot do so as well. This is the point of multiple realizability.

> If one bypasses this point by insisting that it is all about the combination of computation along with the physical processes used to carry the formal program, then one also has bypassed the original
> strong AI claim.

No, one has not, unless you think Searle's argument against computationalism is only against programs, not against computers running them! And if you do, you will be at odds with Searle himself since he is quite explicit about arguing against the possibility of computers being conscious. Indeed, to argue that he is only making the case against pure programs would be empty since no one thinks programs in isolation do anything but carry the information the machine running them will ultimately implement.

> And it still is problematic to understand just what formal processes can add to brute ones.
>

Computer programs running on computers are no longer merely "formal processes". They are real events in the real world, as real, indeed, as brain processes running in brains.

> So Searle manages to distinguish his position as biological naturalism

That's what he calls it but so what? He still offers no answers as to what is "natural" except to assert that we know brains cause consciousness. Okay, but that says nothing about whether anything else can. So long as he hazards no explanations for how they do it, which somehow computers cannot match (as people like Edelman and Hawkins attempt), then he is just naming his position, he isn't explicating it.

> and insists that one (Dennett's among others) of the motivating factors of strong AI is still the idea that we can learn things about mind by studying the laws of computation without needing any information whatsoever about real brains.
>

Notice that real world brain researchers like Stanislas Dehaene (excerpts from a recent talk he gave in Paris available on this list in some earlier posts) pay attention to what Dennett says. Dennett, for his part, is engaging in a theoretical approach that, among other things, considers what it is brains must do if they are to produce consciousness. Dennett, in fact, has been involved in actual brain research (as his Consciousness Explained documents) so it is absurd to say that he is arguing for a model of consciousness that takes no account of what brains actually do. If Searle makes THAT assertion (and I don't recall him doing so -- but I don't have a photographic memory) then he is way off base. (Recall that one of Dennett's claims is that to succeed in building an artificially conscious entity, we have to do all the things brains manifestly can do. THAT's why he argues for massively parallel processing!)

> But I do agree that brain science is tough. And I would disagree with your idea that Searle has to be a dualist because brain science is both tough and he is arguing against computational theories of mind.
>

My idea that he is an implicit dualist hinges on one thing only: to suppose that syntax qua computational processes running on computers cannot achieve consciousness, even if they are doing the right things in the right way, you have to presume that consciousness cannot be causally reduced to non-conscious constituent processes or events. Once we shake that picture and recognize that there is nothing in our own experience that isn't replicable by a physical process-based system, then there is no reason, at least in principle, that consciousness cannot also be realized on other kinds of platforms than brains.

> For your argument to go through (Searle's dualism that he doesn't know is implied by his CRA and biological naturalism), you would have to waffle on strong AI.

This is simply false but it reflects your rather odd view that Dennett is and is not arguing for "strong AI"! See above for my response to that.

> I believe you do along with all the systems and robot repliers. But if you waffle, you're really accepting something with which Searle is in agreement.
>
>
> Cheers,
> Budd
>

Then why do you think Searle doesn't just say, 'You know, Dennett's right about that. A massively parallel computational system like he describes could achieve consciousness because my CRA is ONLY about a simple rote response system such as I specked in the CR!'

If, in fact, Searle's position is as you describe it, then all that's needed is for him to agree with Dennett.

But if he doesn't (or can't, based on his already well documented arguments), then how can you continue to say that I or Dennett are "really accepting something with which Searle is in agreement"?

I'll leave you to sort this one out.

SWM


3.1.

Reply to Sean on Policy, Searle and Witt.

Posted by: "gabuddabout" wittrsamr@xxxxxxxxxxxxx

Tue Feb 23, 2010 4:28 pm (PST)





--- In WittrsAMR@yahoogroups.com, Sean Wilson <whoooo26505@...> wrote:
>
> Budd:
>
> On the matter of the quoting policy, the rule is only for the benefit of the discussion board. Because of the way I have configured this group, people read it in different venues. If we were just an email group, leaving the message would be fine. But it makes it difficult for 3rd parties who stop by to read the message board.
>
> Take a look at the first 3 messages in this thread: http://seanwilson.org/forum/index.php?t=msg&th=1844&start=0&S=9559940f1b71186aaa8f207f8432cfe2
>
> Compare it now to this thread: http://seanwilson.org/forum/index.php?t=msg&th=977&start=0&S=9559940f1b71186aaa8f207f8432cfe2
>
> If you were a visitor and had clicked one of the two threads, reading the first 3 of the first would be very easy. If you think about it, books and essays and letters, etc., don't use the telephone-conversation format. It's because it makes things easier for people not participating in the discussion to read. That's the key. Are you just talking to Stuart or are you trying to leave ideas that someone else might benefit from consuming? The discussion board is concerned with the latter.

Thanks, Sean. I didn't have too much trouble with either thread, though I see what you mean. If one sees that someone signed off, me for example, one needn't be bothered by whatever was written below unless one wanted to be. I assume that all posts are fair game for anybody. I also see that Searle's writings, for one reason or another, are not seen as excellent extensions and clarifications of the best that Wittgenstein had to offer. The difference is that Searle's books are easier to digest in terms of understanding all the points considered. I wonder if there is anything truly important in Wittgenstein that Searle never took up. That would be a learning experience for me, and it is partly why I tried to engage Stuart here, hoping that someone else may respond as well.

Cheers,
Budd


4a.

Re: Debating with Functional Programmers

Posted by: "kirby_urner" wittrsamr@xxxxxxxxxxxxx

Tue Feb 23, 2010 4:59 pm (PST)





--- In WittrsAMR@yahoogroups.com, "iro3isdx" <wittrsamr@...> wrote:
>
>
> --- In Wittrs@yahoogroups.com, "kirby_urner" <wittrsamr@> wrote:
>
>
> > In between New Math and what gets ridiculed as New New Math was the
> > rise and fall of intervening schools of thought. Constructivism,
> > constructionism... you know the ones. The Math Wars plays out daily,
> > in mostly ritualistic fashion, the positions well known.
>
> There's a "constructivism" in mathematics education, but I'm not sure
> what that is. I don't really have a problem with Bishop's
> constructivism in mathematics. It's an interesting alternative
> approach to math, though perhaps I see it as something like doing math
> with both hands tied behind your back. But when some constructivists
> go all religious about it, and argue that everything else is wrong -
> that's when I begin to see them as a bit nutty.
>

In the Math Wars, constructivism is usually associated with Piaget,
and then a set of practices wherein students are supposed to
"construct their own concepts", meaning more emphasis on active
learning, less passive receiving of "direct instruction".

The traditionalists decry constructivism as encouraging kids to waste
too much time trying to reinvent every wheel, as if centuries of
heritage could spring ab initio from the individual, as if
"understanding" addition required inventing one's own algorithm for
doing it.

I do think constructivists tend to glorify and romanticize child
prodigies quite a bit, many of them having been prodigies themselves
and still nursing grudges against the many authoritarian teachers who
only interfered with their genius. In retrospect, they want to set
things up for coming generations such that gifted students such as
themselves get more freedom to self school, even if in a classroom
context a lot of the time.

I'm sympathetic insofar as I think we have many styles of learner
out there, and "child prodigy" is one of them (really too generic a
label -- plus I'm not an early childhood development specialist,
like my neighbor Laurie Todd).

Direct instruction is not a terrible thing, especially if you're
pawing through Youtube, taking control over sampling. Catching a live
performance is great too -- lecture culture is a lot like music
culture, and in some venues, we mix them (e.g. Prairie Home
Companion).

Kirby

