Re: [Wittrs] Wittgenstein on Machines and Thinking

  • From: "SWM" <swmirsky@xxxxxxxxx>
  • To: Wittgenstein's Aftermath <wittrs@xxxxxxxxxxxxxxxxxxx>
  • Date: Mon, 20 Jun 2011 14:25:12 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Sean Wilson <whoooo26505@...> wrote:
>
> [corrected version]
> 
> 
> 'Is it possible for a machine to think?' ... the trouble
> which is expressed in this question is not really that we
> don't yet know a machine which could do the job. The
> question is not analogous to that which someone might
> have asked a hundred years ago: 'Can a machine liquify
> gas?' The trouble is rather that the sentence, 'A machine
> thinks (perceives, wishes)' seems somehow nonsensical.
> It is as though we had asked 'Has the number 3 a
> colour?' (BB 47)
> 
> But a machine surely cannot think! - Is that an empirical
> statement? No. We only say of a human being and what
> is like one that it thinks. We also say it of dolls and no
> doubt of spirits too. Look at the word 'to think' as a tool.
> (PI §360)
> 
> SW

And the issue, Sean, would be whether a machine can be like us in a relevant 
way. Say a machine were built to speak to us in a thoughtful and autonomous 
way. (By "autonomous" I mean without being pre-programmed to give certain 
answers to certain questions under certain conditions.) Now we have a machine 
that is like us in a relevant way. Maybe it lacks a body like ours (it's not 
Commander Data). Maybe it lacks some of our sensory capabilities because of 
the different equipment to which it is attached. But suppose it has enough 
sensory capability to share enough of our world, enough language capability 
to put information into words we can understand, AND the capacity to learn 
and think about what it encounters and has learned. If it then answered 
questions intelligibly (without being programmed to the question, as it 
were), what would the problem be?

Is it that "think" or "understand" are not simple terms with simple meanings? 
Well, that's fine, because a great many of our terms are not, even when 
applied to entities like ourselves.

Would you make the case that Wittgenstein, in the above passages, was saying 
that it makes no sense to say of a machine that it thinks?

But what about apes, many of which have shown clear thinking behaviors? Or 
dogs? What about an alien organism from another planet? Would we be unable to 
think of it as thinking merely because it is sharply different from ourselves?

If any of these can be said to think, why not a machine, too? Of course, this 
is not to say that it would make sense to say of any old machine that it's 
thinking! My toaster certainly shows no signs of contemplation before browning 
my bread. Nor does my PC. But why would we not be able to say of some machines 
that they think, even if there are no such examples as of now?

SWM 


_______________________________________________
Wittrs mailing list
Wittrs@xxxxxxxxxxxxxxxxxxx
http://undergroundwiki.org/mailman/listinfo/wittrs_undergroundwiki.org