[lit-ideas] Re: The Answering Machine

  • From: Donal McEvoy <donalmcevoyuk@xxxxxxxxxxx>
  • To: "lit-ideas@xxxxxxxxxxxxx" <lit-ideas@xxxxxxxxxxxxx>
  • Date: Thu, 12 Jan 2012 13:06:35 +0000 (GMT)

----- Original Message -----

>> A Turing test is based on the idea, you've got this right, that if
you talk with the machine and can't tell the difference between talking
with a machine and talking with a person, you call the machine
intelligent.>>


The fallacy in this argument has been, I think, recently mentioned: that a
machine can simulate intelligent behaviours, even to any degree of physical
specification, does not mean the machine is intelligent in the sense of
human conscious intelligence – for human conscious intelligence is at a
level [in Popper's terms, the level of 'World 2'] that is not reducible to
a merely physical specification [in Popper's terms, the level of 'World
1']. We may, mistakenly, view or interpret such simulated intelligent
behaviours as reflecting the intelligence of the machine – perhaps because
we do not know it is a machine; or perhaps because, though we know it is a
machine, we do not accept there is any level to 'intelligence' beyond a
World 1 level of, say, 'processing'. In the latter case, given that
machines can calculate more quickly and reliably than humans, we should
have to admit – in view of their greater World 1 processing capability –
that some computers are more intelligent than humans. Indeed, Einstein
said, "My pencil is more intelligent than I."
 
But what Einstein surely meant by this was that, by using a pencil (and
paper) to put down his thoughts and calculations, he put them in a form in
which he could examine and assess and revise them in a way that was much
more productive than if he had been confined to doing all his work 'in his
head'. The productiveness of 'objectifying' the contents of our World 2
thoughts into some 'objective' World 3 content, which may then be
criticised and revised, is far from an experience unique to Einstein –
almost every writer or scientist or artist or schoolchild will have had
the experience of seeing their work transformed as it moves from initial
'thoughts' to some 'objective' World 3 content. The process of producing
such work is not simply a process of 'objectifying' some initial thought
but an interactive process of continual feedback between a person's World
2 activity and some 'objectified' World 3 objects.
 
This kind of interactive process is beyond the 'processing' of a computer,
which processes merely at the level of World 1. In Popper's view, a
computer is merely a glorified pencil and paper – a tool we can use to aid
our own interaction with World 3, and to store and process World 3 content.
 
But Eric's post gives rise to the question: can a machine simulate
intelligent behaviours, even to any degree of physical specification?
Clearly computers can simulate some kinds of intelligent behaviours – e.g.
calculating and 'computation' – to a degree that matches, and even
exceeds, human capabilities. But what about arguing? Here we might guess
that the limitations of the computer would easily be shown up, as Eric
indicates. The computer might simulate argument through a series of
programmed responses (and evasions), but could it simulate pursuing an
argument as the course of that argument evolved? Not beyond its programmed
responses and evasions. And this would show, as those programmed responses
are merely based on World 1 processes [e.g. if they emit sounds 'X' then
respond by emitting sounds 'Y'], that the computer is not actually
following the argument in terms of its World 3 content and so cannot
adequately respond to developments in that content.
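 
To make the 'sounds X in, sounds Y out' point concrete, here is a minimal
sketch (in Python; the table entries and evasions are invented purely for
illustration, not taken from any actual program) of a stimulus-response
'arguer' of the kind described above:

    import itertools

    # A fixed World 1 mapping from input strings to output strings,
    # plus a rotating stock of evasions for anything outside the table.
    RESPONSES = {
        "computers can think": "What do you mean by 'think'?",
        "a simulation of arguing is arguing": "Is a simulation of rain wet?",
    }
    EVASIONS = itertools.cycle([
        "Interesting - could you put that another way?",
        "Let us return to an earlier point.",
    ])

    def reply(utterance: str) -> str:
        # Normalise the input and look it up. Nothing here tracks the
        # World 3 content of the argument - only the shape of the string.
        key = utterance.strip().lower().rstrip(".!?")
        if key in RESPONSES:
            return RESPONSES[key]
        return next(EVASIONS)

    print(reply("Computers can think"))           # programmed response
    print(reply("But consider modus tollens."))   # evasion

However large the table is made (or however elaborate the machinery that
stands in for it), the replies are selected by matching the physical form
of the input – which is precisely the limitation just described: a
genuinely new move in the argument can only ever draw an evasion.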
 
For a computer to argue as humans do (and not merely simulate arguing by
way of a programmed response), the validity of the logical standards used
in the argument would have to be reducible to a World 1 level of
'processing' or 'programming': but, before Christmas, we looked at
Popper's "Revised Version of an Argument By Haldane", which explains why
this cannot be the case – not because logical standards cannot be
physically embodied in a World 1 programme [such as one on a computer] but
because the validity of those logical standards cannot be explained in, or
reduced to, purely physical [or World 1] terms.
 
We might use a computer to access or process information; but we no more
argue with a computer than we do with a pencil and paper. Whatever their
sophistication, computers are no more intelligent than a pencil and paper
in terms of World 2 conscious intelligence (they have none), and no more
able than a pencil and paper to understand the World 3 content they
'process' (they have no such understanding).
 
Donal
London
