[C] [Wittrs] Digest Number 139

  • From: WittrsAMR@xxxxxxxxxxxxxxx
  • To: WittrsAMR@xxxxxxxxxxxxxxx
  • Date: 12 Feb 2010 10:53:55 -0000

Title: WittrsAMR

Messages In This Digest (3 Messages)

Messages

1.

Re: 08 - [C] Re: Kripke's Language Game Solved

Posted by: "Rajasekhar Goteti" wittrsamr@xxxxxxxxxxxxx

Thu Feb 11, 2010 6:51 pm (PST)




JPDeMouy
The nature of prefixes and suffixes may be able to clear your doubt regarding "bearer-calls" and "bearer-assignments". Thank you.
sekhar

--- On Fri, 12/2/10, J D <ubersicht@gmail.com> wrote:

From: J D <ubersicht@gmail.com>
Subject: 08 - [C] [Wittrs] Re: Kripke's Language Game Solved
To: wittrsamr@freelists.org
Date: Friday, 12 February, 2010, 7:44 AM

2.1.

Re: Meaning, Intent and Reference (Parsing Fodor?)

Posted by: "jrstern" wittrsamr@xxxxxxxxxxxxx

Thu Feb 11, 2010 6:56 pm (PST)



--- In Wittrs@yahoogroups.com, "SWM" <SWMirsky@...> wrote:
>
> > If the world is complex, how could philosophy of the world be
> > simple?
> >
> > And, the world *is* complex.
>
> I meant straightforward as opposed to merely simple, but I agree that it isn't always possible to state ideas in the simplest of terms.

Look at the Mandelbrot set. You state it simply, yet the result
is infinitely complex. The statement gives you none of the details.
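[Editor's aside, not part of the original post: the point is easy to exhibit. The whole "statement" of the Mandelbrot set is the one-line iteration z -> z^2 + c, yet its boundary is infinitely intricate. A minimal sketch:]

```python
def escape_time(c, max_iter=100):
    """Return how many iterations of z -> z**2 + c it takes |z| to exceed 2,
    or max_iter if it never does (i.e. c appears to be in the Mandelbrot set)."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(escape_time(0))    # never escapes: prints 100 (in the set)
print(escape_time(1))    # escapes almost immediately: prints 2
print(escape_time(0.3))  # just outside the set: escapes, but slowly
```

The rule is one line; all the complexity lives in how escape times vary from point to point, which the rule itself never mentions.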

> Edelman and Hawkins

I just don't care.

I consider all such work pathologically wrong, naive, ill-informed, atavistic. See Bennett and Hacker for substantial (but not all) details of the problem, just don't read them for the solution!

> Whether Fodor is doing that is something I haven't determined to my own satisfaction yet.

Read:
* The Language of Thought (1975)
* RePresentations (1980)
and
* LOT 2 (2008)

(you can skip his other eleven or so books)

Then - you still won't know, but in much more detail!

> > ... This is "the systems reply" writ large.
>
> Here we are in agreement though I would have (and have) expressed it differently.

The devil is in the details, of which I'm afraid there are many.

As Fodor says, and I agree, the only way to do this stuff is extremely multidisciplinary. And we naked apes aren't very good at stuff that has whole lots of free variables. It's a problem.

> This is what I was hoping for: a "computer language" of brains, then? Of course, the language of programming isn't the language of the computer, since it must become machine language first for computers to actually implement it, right? So is Fodor's language of thought the machine language, while English is like COBOL, say?

Yeeeah, to a first approximation, sort of.

As a matter of fact, Fodor even makes a bogus reference to compilers somewhere or other; he mistakes the compilation process for the execution process.

You have in my "systems reply writ large" the computer hardware, a program, and the execution process/trace. I take it this three-plus-component set is a canonical form, and effectively irreducible.

Compilation, from Cobol to assembler, is basically free. Fodor doesn't worry much about English as such; what he worries about are behaviors and situations and how the semantics have to work (in Cobol or assembler or LOT or English): the logical issues.
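[Editor's aside, not part of the original post: the compilation/execution distinction being drawn here can be made concrete. In Python, translating source text into an executable code object is a separate step from running it; the example below uses only the built-in `compile` and `exec` and is purely illustrative.]

```python
source = "total = sum(range(10))"

# Translation: turn source text into a code object. Nothing runs yet.
code_object = compile(source, "<example>", "exec")

# Execution: only now does the program actually do anything.
namespace = {}
exec(code_object, namespace)

print(namespace["total"])  # prints 45
```

Conflating the two is exactly the mistake being attributed to Fodor: the compile step is a one-time translation between notations, while the execution step is the ongoing process/trace.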

> > That *some* physical form is eventually found to correspond, is important, but that's not Fodor's department. But, just to cause us all pain, Fodor insists that this computer language works only because and when it corresponds in some dual-aspect manner also to innate and preexisting concepts, that represent (eg, mirror) the world.
>
> This gives me some further trouble (as you suspected it would). But it does sound like he's saying something like Sean is getting at with his "brain scripts".

Yes, but Fodor is all about LOT, and Sean, good Wittgensteinian, is not! Now, this is a problem, but not irremediable. Scripts without language? Or, what language is legal for writing scripts? And aren't scripts rules of a sort? I think there are solutions to all of these, but they are going to move away from W-classic.

But really, no, Fodor doesn't go for the script idea as such, Fodor does concepts and modules and genericity. I'd add scripts to that list, actually, so I have some sporting interest in seeing where Sean gets with his things.

> > Now, clearly, if you HAVE something that represents and mirrors the world, that would be handy.
>
> How does he think this happens? Presumably the idea of "representing" and "mirroring" is not intended as we might use the terms for the conscious aspect of our minds (i.e., that we are aware of representing and mirroring when we are doing these things). Presumably he thinks there is a tacit, non-conscious one-to-one relation between world object and thought object in the language of thought then?

Basically, yes.

And hardly anybody likes that, but his argument is that his system is only clarifying what everybody typically does with "propositional attitudes", and so it's very tricky (for fans of propositional attitudes, who are many) to attack him effectively.

Josh

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/

3.1.

Is Homeostasis the Answer?  (Re: Variations in the Idea of Conscious

Posted by: "SWM" wittrsamr@xxxxxxxxxxxxx

Thu Feb 11, 2010 7:03 pm (PST)



--- In Wittrs@yahoogroups.com, "iro3isdx" <xznwrjnk-evca@...> wrote:

> --- In Wittrs@yahoogroups.com, "SWM" <SWMirsky@> wrote:
>

> > On the use of "mechanism" I have invoked, the movement of the stream
> > involves some mechanism, too, in this case the way(s) in which the
> > molecular constituents of the stream operate at a deeper level. But
> > one could also speak of a stream's mechanism in more macro terms,
> > e.g., by referring to its behavioral tendencies.
>
> I think you are mostly confusing yourself here.
>

I suppose it wouldn't be the first time!

> The point is that we make our machines to follow our rules, and to
> resist external influence. And sure, the resistance is not unlimited,
> and a strong enough external influence can change it. So trains can
> derail, but not easily.
>
> Adaptive things are far more sensitive to small changes in the
> environment.
>

My point is that "adaptive" is not a basic function in the way causal relation is. I think your confusion is in somehow equating the two and then saying adaptation is the thing that's needed. But adaptation can probably be better explained as a function of causal complexity. If so, it cannot solve your problem.

> In some sense, we can be conscious to our world because we are
> sensitive to small changes in our world. The computer is unconscious,
> and essentially solipsistic, because it is largely oblivious to small
> changes in the world.
>

Again, the point of a Dennettian type model is to say that complexity enables computers to rise to the level of behavior we find in living organisms. It remains to be implemented and tested of course. But there's no sense arguing against the possibility on the grounds that living organisms operate differently than the current crop of computers. Of course they do! The issue is whether there is something about living organisms that can be replicated in computers.

>
> > If the homeostatic system's adaptive behavior is a function of
> > the operating mechanics of its constituents, which is hardly
> > an unreasonable supposition given what we know of chemistry and
> > physics, then there is no reason to presume that "adaptiveness"
> > is a stand-alone or otherwise basic competitor of "caused behaviors".
>
> I'm not sure what point you are making there. I have never suggested
> that homeostatic systems are exempt from causation.
>
>

If homeostasis leads to adaptation as you suggest and causality lies at the bottom of homeostasis, then there's no reason to suppose that causally driven computers cannot also achieve the kind of behavioral adaptationism that living organisms achieve.

> > In keeping with what I've already said, it seems to me that the
> > distinction you are making is wrongheaded. Whatever is adaptive is
> > so because of its underlying mechanisms which are describable as
> > algorithms (sets of procedural steps).
>
> I challenge you to accurately describe the adaptiveness in terms of
> algorithms.
>

The issue is to develop algorithms that enable adaptation. You have a conception of this that seems to hold that a computer can only be built to do exactly what is programmed into it and that does pretty much describe what we expect of most computers today. But the point of AI is to develop algorithms that do, in fact, adapt. Minsky has a whole slew of proposals in The Emotion Machine. Hawkins says the way to do it is to implement a relatively simple algorithm in a chip and then combine these chips in a complex array along the lines of how neurons are arrayed in brains. In either case algorithms are at the bottom of what is meant to be achieved.
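[Editor's aside, not part of the original post: a minimal, hypothetical sketch of the claim that a fixed, causally driven algorithm can nonetheless "adapt". The function and numbers below are illustrative only and stand in for nothing in Minsky's or Hawkins's actual proposals.]

```python
def adapt(estimate, observation, rate=0.3):
    """One causally fixed update step: nudge the estimate toward what was
    observed. The rule itself never changes; the behavior it produces does,
    because it depends on the system's input history."""
    return estimate + rate * (observation - estimate)

estimate = 0.0
for obs in [10, 10, 10, 10, 10, 10, 10, 10]:
    estimate = adapt(estimate, obs)

print(round(estimate, 2))  # has converged most of the way toward 10
```

The point of the sketch: nothing here is exempt from strict causal determination, yet the system's responses change with experience, which is all "adaptation" need mean at this level.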

You are arguing that efforts like these are foredoomed because algorithmically driven processes lack the capacity to adapt. But the point is to look at Dennett's model and note that it hinges on complexity. ("Complexity matters," he writes.) A sufficiently complex system would have the modular tools to deal with unanticipated inputs in new ways. We have seen on the Analytic list (I forget the exact reference, unfortunately) how at least one writer argues that introducing parallelism (as Dennett envisions to achieve the necessary level of complexity) introduces uncertainty, the possibility of new (unplanned-for) outcomes.

If living systems are algorithmic at a genomic level too then even their adaptational capacity is causally grounded.

>
> > Anyway, and in keeping with my question, is the breakdown of the
> > underlying relations, relative to how we get consciousness, that
> > you want to give the following then:
>
>
> > Homeostasis produces Pragmatic Selection produces Perception produces
> > Adaptiveness produces Consciousness?
>
> No, that's far too simplistic. Homeostasis provides a way of making
> pragmatic judgments, but is not necessarily pragmatic on its own
> account. Pragmatic judgment provides a way of making the decisions
> needed to construct a perceptual system, but pragmatic judgment does
> not necessarily lead to perception. Perception is a requirement for
> consciousness, but perceiving systems are not necessarily conscious.
>
> Regards,
> Neil
>
> =========================================

So what is the feature that produces what we recognize as consciousness? The last element you gave us was "adaptation". Is that the critical feature or step? If it isn't, what is, and how does it relate to the underlying importance of homeostasis, which is what you originally told me was the key?

SWM

