[Wittrs] Biology, AI, as research programs

  • From: "iro3isdx" <xznwrjnk-evca@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Mon, 22 Mar 2010 21:08:57 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, "gabuddabout" <wittrsamr@...> wrote:


> From recent discussion I've gleaned that there are two noncompeting
> research programs in the form of weak AI (how do we get artifacts
> to do what we want and what programs are necessary for exactly
> that) and biological naturalism a la Searle such that we look for
> "neurobiological correlates of consciousness" with our mind meters,
> say.. (NCCs) in order to find actual causal mechanisms later.

Both programs are likely to fail.  The trouble with biological
naturalism is that it is part of philosophical naturalism, so it isn't
really a scientific research program.  If it is research at all, then
it is research in playing word games.  That will accomplish nothing.

In principle, the weak AI program could work.  In practice, it is
unlikely to succeed.  AI folk, cognitive psychologists, and some
philosophers are led astray by the fact that it could work in
principle, and so they fail to look carefully at the requirements.

The trouble with the weak AI program is that it relies on an enormous
amount of innate knowledge.  The innate knowledge requirements are so
large that they could never be met by any practical system.  And then
there is the question of whether a system based on innate knowledge
could actually be conscious.  The Chalmers zombie arguments cast doubt
on that possibility.  The Searle argument is also intended to question
it, though I think it not very effective in doing so.  If you look at
the discussions surrounding AI, you will see many people arguing that
intentionality does not exist, that consciousness is merely an
illusion, etc.  So I think it is a fair conclusion that many AI
proponents intuitively sense that their methods will not obviously
lead to consciousness or intentionality, though they may still hold
out hope that intentionality and consciousness will arise in a
non-obvious way if they can get the behavior right.

The alternative to the use of innate knowledge is a learning system.
AI folk do understand that learning is required.  However, the research
results in machine learning are unpersuasive.

For myself, I got interested in this area via an interest in learning.
I think I know what is required for learning, though I find myself in
strong disagreement with the machine learning people, with the
epistemologists, and with the behaviorists from psychology.  Roughly
speaking, those groups all see learning as some kind of induction (the
behaviorists call it "conditioning," but it seems to amount to the same
thing).  However, induction is a very weak, highly fallible method that
cannot hope to account for the effectiveness of human learning.


> And as far as you, Gordon, and I know, weak AI is the holy grail
> these days, and Searle doesn't argue a priori against its possible
> success, though I recall you have expressed a bit of reservation as
> to its prospects--and it may be the same sort of reservation
> Putnam has in mind, considering the problem of abductive reasoning
> on the part of any AI system.

Count me as skeptical of abductive reasoning.  I don't think there is
any such thing.  As far as I can tell, abductive reasoning is a kind of
"god of the gaps".  It is a term that one pulls from one's hat to
"explain" an advance in scientific knowledge that does not fit what the
epistemologists present as the methodology of science.


> I've recently learned that it is Hacker and Bennett who try to argue
> that the very thesis of Searle's biological naturalism is incoherent,
> i.e., that it is a mereological fallacy to suggest that the brain
> causes consciousness.

They might be right about that.  I gave a link to some of my ideas in
a post to this group on Saturday
<http://groups.yahoo.com/group/Wittrs/message/4784>.  In the draft I
linked to, I call on a particular mathematical dualism to show how you
can have properties of the whole that are not obvious from the parts.

For myself, I see consciousness arising out of the interaction between
a person and his/her world.  A brain by itself (e.g., a brain in a vat)
does not have such interactions.


> My suggestion, again, is that they are noncompeting research
> programs.


> Anyone arguing otherwise is itching for a debate.

That they are incessantly debating one another might be seen as an
argument that they are in competition.

In fact, I think they are in competition.  They are not competing to
find a solution; they are competing to win supporters.  And the main
reason both are competing this way is that both are failed research
programs.  If they were succeeding, then they would be going about the
business of solving the problems instead of debating them.

Regards,
Neil
