[Wittrs] Mirsky on Watson and Searle

  • From: Wittr2Feed <wittrs2feed@xxxxxxxxxxxxxxxxxxx>
  • To: language goes on holiday <wittrsfeed@xxxxxxxxxxxxxxxxxxx>, "wittrs2feed@xxxxxxxxxxxxxxxxxxx" <wittrs2feed@xxxxxxxxxxxxxxxxxxx>, "Philscimind@xxxxxxxxxxxxxxxxxxx" <Philscimind@xxxxxxxxxxxxxxxxxxx>
  • Date: Mon, 21 May 2012 14:49:25 -0700 (PDT)

(forwarding this)

link: http://www.rockawave.com/news/2011-03-11/Columnists/The_Rockaway_Irregular.html

The Rockaway Irregular

Beautiful Minds
Commentary by Stuart W. Mirsky

A few weeks ago technology took another great leap forward when IBM’s 
computational platform, “Watson,” using 90 linked high-powered servers in a 
massively parallel array, sophisticated natural language processing and a 
humongously encyclopedic database, beat two human Jeopardy! champions on 
national TV. I don’t watch the show as a rule but my son, who has an interest 
in cognitive science from his college days, called to say, “You gotta see this, 
Dad!”

He knew I shared his interest in the possibility of artificial intelligence 
and, even more, artificial consciousness, and that one of the things that’s 
always fascinated me is what it means to have a mind. Silly question, right? 
Except it isn’t, because being the thinking, knowing, aware creatures that we 
are is not as easily explainable as the rest of the stuff we know about the 
universe. We can explain the way physical things work, we know how to put 
humans into space, study the stars through massively sophisticated telescopes, 
deconstruct the mechanics of biological organisms, and cure illnesses that 
tormented mankind for centuries. We can build vast cities with skyscrapers and 
superhighways. And we’ve got jet planes and submarines, rockets and nuclear 
technology. But we still don’t understand ourselves.

Yes, we know we’re biological organisms and that genes provide the blueprints 
for what we are, that certain organic molecules combine to produce each of us 
according to preset genetic plans encoded in our DNA in a series of biological 
processes. But what enables us to know it? What is there about the particular 
biological device in our heads we call a brain that makes us more than just a 
mobile piece of meat, an organic robot? Why and how do we know, feel and think 
about anything at all?

Back in 1997, IBM’s Deep Blue supercomputer beat the reigning human chess 
champion, Garry Kasparov, in a series of games that set pundits abuzz. But Deep 
Blue didn’t win by out-thinking Kasparov; it won by out-processing him. The 
computer program simply crunched more possibilities faster and more efficiently 
than his human brain could. Our brains are slower than computers, in any case, 
because they pass electrical charges from neuron to neuron chemically rather 
than electronically as computers do. Yet, slow as they are, we still have 
something computers lack, Deep Blue included.

So IBM’s newest supercomputer-based program (named for one of IBM’s founding 
fathers, not Sherlock Holmes’ famous sidekick) awed us when it moved beyond 
chess to actually compete successfully with humans in answering unpredictable, 
often complex, real language questions. It did it not by having pre-programmed 
answers (the way computers usually do it) or even by relying on decision trees 
the way expert systems do. It relied, instead, on a natural language program 
that’s adept at determining meanings in words, using a complex associative 
process to select and develop appropriate responses from data stored in its 
memory banks. Jeopardy! questions are famously nuanced and ambiguous, depending 
on implication and allusion. Their scope and unpredictability make 
pre-programming the right answers all but impossible. Still, “Watson,” like Deep 
Blue before it, won.
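
To make the contrast with pre-programmed answers a little more concrete, here is 
a deliberately crude Python sketch of the general idea of generating candidate 
answers from stored text and scoring them by word association. Everything in it 
(the tiny “memory bank,” the overlap scoring, the names) is an illustrative 
assumption of mine; it is nothing like IBM’s actual DeepQA pipeline, which is 
vastly more sophisticated.

    # Toy illustration of evidence-based question answering: generate candidate
    # answers from a small "memory bank" of text, score each by how strongly the
    # clue's words overlap with the candidate's stored passages, and return the
    # best-supported candidate. A drastic simplification, not IBM's real system.
    from collections import Counter

    MEMORY_BANK = {
        # candidate answer -> passages mentioning it (stand-in for a huge corpus)
        "Isaac Newton": ["Newton formulated the laws of motion and gravity",
                         "the apple story is attributed to Newton"],
        "Albert Einstein": ["Einstein developed the theory of relativity",
                            "Einstein won the Nobel Prize for the photoelectric effect"],
    }

    def score(clue, passages):
        """Count overlapping words between the clue and a candidate's passages."""
        clue_words = set(clue.lower().split())
        passage_words = Counter(w for p in passages for w in p.lower().split())
        return sum(passage_words[w] for w in clue_words)

    def answer(clue):
        """Pick the candidate whose stored evidence best matches the clue."""
        return max(MEMORY_BANK, key=lambda cand: score(clue, MEMORY_BANK[cand]))

    print(answer("This physicist's theory of relativity changed physics"))
    # -> Albert Einstein, chosen by word overlap alone, with no grasp of meaning

Note that even this toy version gets its answer without anything resembling 
comprehension, which is exactly the point Searle presses below.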

Shades of The Matrix, in which superintelligent machines take over the world, 
turning humans into batteries to serve as their power source! Is that our 
future, then?

Not to worry, says renowned philosopher John Searle, who teaches philosophy of 
mind at the University of California, Berkeley. Invoking his longstanding 
argument that computational processes amount to no more than what you get if 
you lock a man in a room with a set of rules for matching input symbols, 
whose meanings he cannot fathom, to other equally opaque symbols, Searle 
assures us that “Watson” not only doesn’t understand anything but cannot reach 
a point where it does.
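
For readers who want the “room” spelled out, here is a minimal sketch of the 
purely syntactic rule-following Searle has in mind: a lookup that matches 
incoming symbols to outgoing symbols by their shape alone. The symbols and the 
rule book are invented for illustration; nothing in the lookup ever touches what 
the symbols mean.

    # A minimal sketch of the man in Searle's Chinese Room: he matches incoming
    # symbols to outgoing symbols using a rule book, purely by form, with no
    # access to what any symbol means. The rules here are made up for illustration.
    RULE_BOOK = {
        # input squiggle -> output squoggle (matched only by shape)
        "你好": "很好，谢谢",   # the operator never learns these are greetings
        "再见": "再见",
    }

    def man_in_the_room(symbol_in):
        """Look up the incoming symbol and hand back whatever the rules dictate."""
        return RULE_BOOK.get(symbol_in, "⁇")  # unknown shape: hand back a placeholder

    print(man_in_the_room("你好"))  # looks like fluent conversation from outside

From outside the room the exchange can look like understanding; inside, it is 
nothing but lookup.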

Writing in the February 23rd edition of the Wall Street Journal, Searle notes 
that symbol matching, based on rules relying on nothing more than a symbol’s 
shape (or other non-meaning-related criteria), is merely syntactic, while 
grasping meaning involves more. No computer, he stresses, can ever achieve that 
extra something because computers operate entirely by syntax.

But IBM’s “Watson,” run on a massively parallel system and built to respond to 
a broad range of natural language questions via implication, allusion and so 
forth, does seem to be a bit more than a mere symbol matching device. Searle is 
surely right that “Watson” doesn’t know things the way we do. It doesn’t even 
know it won its game, as he put it in the recent article. But then it wasn’t 
built to. The real issue is what would be needed for it to know things: what 
would its makers have had to engineer into it? And here Searle has little to 
offer.

What’s missing, he tells us, is something we all find in ourselves, but he never 
attempts to break that down and ascertain what it is. As with pornography in 
the famous Supreme Court decision, he seems to believe we know it when we see 
it. Sometimes we call it “awareness” (though others may think of it as 
“feeling” or “intentionality,” etc.). When we think about anything, we “see” it 
(or something about it) in our minds. We have mental pictures which kick up 
other mental pictures in a stream-of-consciousness process of ongoing 
associative events.

Searle argues that computers can never have that because their underlying 
processes are just rote symbol matching, nothing more. But the fact that a 
computer’s underlying operations are “syntactic” may say less about its 
supposed inability to mimic the human brain than John Searle imagines. In fact, 
in his long career he has never yet given an account of what having mental 
images, having the capacity to “see” with our minds when we understand 
something, actually amounts to — nor any reason to think that the basic 
operations in brains aren’t syntactic, too.

What if a sufficiently complex and layered computer program, using the same 
basic syntactic processes available to all computers, could develop and use 
representational models of its world and its various internal systems and 
components (the way we’re aware of the elements of our world and all our aches 
and pains and other somatic sensations)? What if this were then integrated with 
a “Watson”-like natural language program and the same massive database of 
stored inputs? Why should we think that such a system, now able to image itself 
and the world, as well as the myriad relations obtaining between these 
different layers of representation, would not be able to understand what it 
means to play and win games, too?
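
Purely as a thought experiment, one can sketch what such a layered arrangement 
might look like in code: one representation of the world, one of the system’s 
own states, and a thin layer relating the two. The class, its fields and its toy 
“facts” are all assumptions made up for illustration, and nothing in the sketch 
settles whether such layering would ever amount to understanding.

    # A speculative toy sketch of a "layered" agent: one model of the outside
    # world, one model of its own internal state, and a report method that
    # relates the two. Illustrative only; not a claim about any real system.
    from dataclasses import dataclass, field

    @dataclass
    class LayeredAgent:
        world_model: dict = field(default_factory=dict)   # facts about the environment
        self_model: dict = field(default_factory=dict)    # facts about its own state

        def sense(self, fact, value):
            self.world_model[fact] = value

        def introspect(self, fact, value):
            self.self_model[fact] = value

        def report(self, fact):
            """Answer about the world or about itself, whichever layer holds the fact."""
            if fact in self.self_model:
                return f"I register that my {fact} is {self.self_model[fact]}"
            if fact in self.world_model:
                return f"The {fact} appears to be {self.world_model[fact]}"
            return "I have no representation of that"

    agent = LayeredAgent()
    agent.sense("game score", "a win")        # a fact about the world
    agent.introspect("confidence", "high")    # a fact about its own processing
    print(agent.report("game score"))
    print(agent.report("confidence"))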

Searle’s reassurance aside, the fact that “Watson” 1.0 can only 
beat Jeopardy! contestants in an uncomprehending way really says nothing about 
what some future “Watson” 7.0 — or higher — might accomplish. And maybe that 
ought to be no more worrying to us than putting men on the moon, decoding the 
human genome or discovering antibiotics turned out to be.

--------------------------
Wittrs mailing list
Wittrs@xxxxxxxxxxxxxxxxxxx
http://undergroundwiki.org/mailman/listinfo/wittrs_undergroundwiki.org

** Note: This message was forwarded to Wittrs by the Editorial Board, so that 
members might enjoy or comment upon it. This is a common practice. If the 
message came from another list or rss feed, the link(s) should appear above. In 
such a situation, the original author may not see your reply. Members of Wittrs 
are encouraged to visit the link(s) that are fed here.
