Experts Warn Against Fully Autonomous War Machines
By PNW Staff, September 05, 2017
In August of this year, a loose coalition of tech company CEOs and
artificial intelligence experts wrote an open letter to the UN
Convention on Certain Conventional Weapons. Their concern is the
development of autonomous weapons systems--robot war machines capable of
locating and killing without human intervention.
Elon
Musk, who has become in some ways the public face of this movement,
tweeted out "If you're not concerned about AI safety, you should be.
Vastly more risk than North Korea." Yet this is a technology that
continues to advance at a pace that few people are even aware of.
Most militaries already possess drone technology, and the United States has made extensive use of unmanned hunter-killer drones to bomb terrorist targets in Asia and the Middle East. The key difference is that the Air Force's Predator and Reaper drones are controlled by remote pilots on the ground, whereas the next generation of military robots may remove the human element from the picture entirely.
At present, human operators on the ground fly the airframes, select targets, receive clearance from commanders to use lethal force and then ultimately pull the trigger, just as a conventional fighter pilot would. Although current military law requires this human oversight, weapon systems are in development that would select and eliminate targets on their own.
One such small drone under development scans the battlefield for radio signals and, when it detects them, immediately dives and detonates a grenade. No human control is required. Others operate in swarms of armed drones that communicate with one another to coordinate and select targets.
The Pentagon's plans to integrate artificially intelligent machines into drones, tanks, boats and human-like robots would push strategic and tactical decision making away from human control in what experts are calling the "third revolution in warfare."
Able to make decisions more quickly than any human commander while weighing thousands of strategic variables, coordinated AI would confer a vast advantage over less capable armies, much as nuclear-armed militaries with air superiority outmatch conventional, ground-based forces. As soon as that strategic advantage becomes clear, experts believe, the rules preventing its use will be abandoned.
The United Nations is taking the threat of an arms race seriously as well, with a meeting set for November of this year to discuss the ramifications of continued development of artificially intelligent, coordinated weapons systems. A panel of governmental experts on "lethal autonomous weapons systems" has also been convened.
The open
letter written by the 116 AI experts called on this new committee to
"work hard at finding means to prevent an arms race in these weapons, to
protect civilians from their misuse, and to avoid the destabilizing
effects of these technologies." But many fear that any ban would be "unworkable and unenforceable."
Amir Husain, founder and CEO of the AI company SparkCognition, signed the letter but warned that an outright ban would "stifle progress" and innovation. He believes that "the solution--as much as one exists at this stage--is to redouble our investment in the development of safe, explainable and transparent AI technologies."
Wendy Anderson, general manager of SparkCognition's defense division, also points out that any ban on development that the United States (or any other country) accepts would put it at a competitive disadvantage--or simply drive the development underground, to be conducted in secret with fewer controls. Anderson is hardly alone in the sentiment that "we cannot afford to fall behind."
The possession of powerful artificial intelligence and
coordinated, autonomous combat systems would give a clear military
advantage, but experts warn that the Pentagon does not fully comprehend
the risks that uncontrolled AI represents. "AI is not just another
technology," Andy Ilachinski, head research scientist at the Center for
Naval Analyses, said in a recent interview, adding that it is on the cusp of transforming the world on the scale of the printing press or the Internet.
Unlike conventional computer systems, the next generation of artificial intelligence develops its own solutions to problems in ways that are opaque to its programmers. As a machine evolves to become more efficient and capable, it also moves farther from human understanding and control.
The dangers of fully autonomous weapons systems are many, from hackers to conflict escalation and even extinction-level events. Despite the risks, the advantages of such systems are so great that once Pandora's box has been opened, there may be no way to close it.
Hackers: imagine a team of Russian hackers that overrides the control stream of a division of combat drones, tanks and robotic soldiers. Weapons could be turned on friendly populations, destroyed or simply integrated into the Russian military. Surveillance drones have already been hacked by Iranian hackers and redirected to Iranian airstrips.
Escalation
and endless conflict: in democratic societies with conventional
weapons, casualty figures are a strong argument for peace. Foreign wars
that send thousands of young men and women home in coffins become
politically untenable.
With wars fought entirely by robots, there is no such political resistance to conflict. Strong countries would be free to bomb and kill in less advanced countries with impunity, and two countries fielding robotic armies could become locked in an ever-escalating, bloody war with no end.
AI singularity: a far worse result, one that Elon Musk and the other 115 signatories of the letter to the UN consider a very real threat, is the possibility of an artificial intelligence that advances beyond human control. It is frightening to think that Skynet, the antagonist of the Terminator series, could now plausibly become a reality. A self-aware superintelligence in control of a nation's military would be, as analysts warn, an extinction-level event more dangerous even than nuclear weapons.
In the past year, AI has succeeded at defeating the best human players at Go, a board game of strategic thinking that is far more difficult for machines than chess. AI already outperforms humans at medical diagnosis (IBM's Watson) and target recognition, and chatbots tested by Facebook were recently observed to quickly develop their own language, one more efficient than English but almost immediately unintelligible to researchers.