[lit-ideas] Re: Æsthesis

  • From: Omar Kusturica <omarkusto@xxxxxxxxx>
  • To: lit-ideas@xxxxxxxxxxxxx
  • Date: Sat, 11 Apr 2015 12:00:45 +0200

Autonomous war machines that choose targets and destroy them without human
intervention are increasingly likely. Prototypes include, for example, the
simple, fire-and-forget guided missiles already employed in aerial combat.
More sophisticated versions include sensors and pattern-recognition
software that allow them to discriminate between friendly and enemy
aircraft. Autonomous drones may take this development a step further,
cruising in search of possible targets. Whether the software will be able
to do a better job discriminating between, say, a wedding party and a group
of terrorists than the current generation of human operators remains to be
seen. Philosophical debates about whether these machines are making choices
in a human sense will be largely irrelevant to the dangers they pose.

*I am not sure that they will be irrelevant. In fact, it might be pertinent
to these debates whether these machines make choices, and how they do so.
For example, we might want to know whether the 'decisions' they make are
fully determined by their programming or whether there is some autonomous
decision-making capacity. In the first case, legal / political / moral
responsibility is assigned to the human programmers and their political
sponsors, regardless of the fact that they were not present 'on the spot.'
(Just as we don't blame the bullet but the person who fired it.) In the
second case, things might be murkier. We might then urgently consider
banning such autonomous decision-making machines, as they would probably
make it impossible to hold the war-faring establishment responsible in any
meaningful way.

O.K.

On Sat, Apr 11, 2015 at 11:26 AM, John McCreery <john.mccreery@xxxxxxxxx>
wrote:

To Donal,

Allow me a mild objection. I do not advocate materialism. I am not at all
sure what "materialism" means these days. The collapse of the Cartesian
separation of mind and matter affects both sides. Whatever "materialism"
might now mean, a world that includes quarks, strings, bosons, and quantum
tunneling effects is not the world of discrete atoms envisioned by
Democritus, the world of simple machines envisioned by Newton, or the world
of steam, pumps and tubes envisioned by Freud and macroeconomists. If I
sometimes use the phrase "material conditions," I am simply at a loss
for other words to describe what the world imposes on our perceptions,
reactions, decisions or other stuff that we take to be going on inside
ourselves. I adopt the position of Karl Marx in the *18th Brumaire*, that
while men make history, they do so under conditions not of their own
choosing.

To Omar,

I have long had an amateur interest in machine intelligence. Here again, I
find little merit in discussions that attempt to define intelligence
abstractly. I find it much more interesting to consider what those working
to develop machines that simulate intelligent behavior are up to these
days. I adopt the position of Warren McCulloch in the introduction to
*Embodiments of Mind,* where the founder of automata theory observes that
he tries to
build machines that will do human things. They always fail to do many
things that humans do. Then, he says, there are always those who leap to
the conclusion that machines will never be able to do what humans do. He,
however, goes off and tries to build a better machine.

When I hear someone trot out the old chestnut that machines can only do
what we program them to, I wonder how much they know about, for example,
recent advances in genetic programming and evolvable machines. In this
field, the aim is to enable the computer itself to develop algorithms with
results that the programmer cannot predict. Achievements in this area
include solutions to mathematical puzzles that have stumped human
mathematicians.
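The core idea can be illustrated with a toy sketch. This is not real
evolvable hardware or the systems mentioned above, just a minimal genetic
algorithm (all names and parameters here are invented for illustration):
the programmer supplies only a scoring function and the selection, crossover,
and mutation loop; the particular solution the search converges on is not
written anywhere in the code.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, seed=42):
    """Toy genetic algorithm: evolves bitstrings toward higher fitness
    via truncation selection, one-point crossover, and point mutation."""
    rng = random.Random(seed)
    # Random initial population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # keep the fitter half unchanged
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)         # mutate one random bit
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" problem: score a bitstring by its number of ones. The code never
# states the answer (all ones); the evolutionary search discovers it.
best = evolve(fitness=sum)
print(sum(best))
```

Real genetic programming evolves program trees rather than bitstrings, but
the division of labor is the same: humans specify how candidates are scored
and varied, and the outcome emerges from the search.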

Autonomous war machines that choose targets and destroy them without human
intervention are increasingly likely. Prototypes include, for example, the
simple, fire-and-forget guided missiles already employed in aerial combat.
More sophisticated versions include sensors and pattern-recognition
software that allow them to discriminate between friendly and enemy
aircraft. Autonomous drones may take this development a step further,
cruising in search of possible targets. Whether the software will be able
to do a better job discriminating between, say, a wedding party and a group
of terrorists than the current generation of human operators remains to be
seen. Philosophical debates about whether these machines are making choices
in a human sense will be largely irrelevant to the dangers they pose.

Now there is a grim thought.

John


--
John McCreery
The Word Works, Ltd., Yokohama, JAPAN
Tel. +81-45-314-9324
jlm@xxxxxxxxxxxx
http://www.wordworks.jp/
