[Wittrs] Re: On computation

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 02:05:47 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:

> So maybe you think he is misguided in seeking mechanisms in the brain
> which would make sense of how we (in fact for him) have bona fide
> mental events, contra the weak AIers, some of whom are inspired
> by Wittgenstein and, like you wrote, don't necessarily distinguish
> between strong and weak AI.

What would make a mental event bona fide (as, say, distinct from
ascribed)?  I doubt that we will ever have a "mental event" meter.  We
will presumably get better at measuring neural events, but those neural
events will turn out to be not quite the same thing as mental events.

I don't see a real distinction between weak AI and strong AI.  If strong
AI will never be achieved with computation, then weak AI will never be
achieved with computation.  AI people may continue to make progress in
emulating human behavior.  But I expect we will always be able to look
at the results and see that they are distinctly different from actual
human behavior.

> I would appreciate a comment on my reply to Stuart today on the
> nature of functionalism being spelled out entirely in second-order
> properties where the computations are concerned.

I don't have much to say on that.  Much of what is discussed in the
various versions of functionalism seems to me to be misguided.  I
suppose you could put me in the interactionalist camp.  Hmm, I just
tried a web search on "interactionalism", and came up with a page
equating that to dualism.  So, no, I am not talking about that kind of
interactionalism (interaction between body and mind).  Rather, I am
talking about interaction between person and world.  Yet I find it
useful to consider that interaction in terms of functions.

The trouble with AI, cognitivism, and much of philosophy of mind, is
that they concentrate too much on mental events, and not enough on how
we interact with the world.  The trouble with behaviorism is that it
concentrates too much on how we interact with the world and tries to
ignore mental events.

> Do systems repliers have to fuzz the distinction between software
> and hardware?

There are two kinds of hardware.  There is the internal hardware
(memory, processor, etc.), and there is the external hardware (the
peripheral devices that pick up input and that perform behavior).

For the internal hardware, the distinction between hardware and
software is never that clear, in the sense that an engineer can choose
whether to put some features in hardware or in software.  For a given
system, the distinction is clear.  But, in functionalist terms, it does
not matter much.

For external hardware, the situation is different.  AI people tend to
relegate the external hardware to the role of peripheral devices, and
mainly concentrate on what goes on internally.  For the brain, by
contrast, the "external" hardware is in the form of billions of
sensory cells, and there is no clear boundary between the sensory cells
and the neural system.  The systems repliers tend to see "the system"
in terms of internal structures.  However, since semantics is closely
linked to how we interact with the world, it is far from clear that we
can just take the external interface for granted.
