Tuesday, January 14, 2014

Terminated: Machines 'Might Fight Us'

The human race faces a real danger from machines that get too smart, said a New York University professor in a network TV appearance. And he’s not talking about sci-fi movies.

“It’s likely that machines will be smarter than us before the end of the century – not just at chess or trivia questions, but at just about everything, from mathematics and engineering to science and medicine,” Gary Marcus told his New Yorker readership.

Marcus appeared recently on “CBS This Morning” with James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.”

While some artificial intelligence is already ubiquitous – the iPhone’s Siri and Google Search, for example – incredible changes are envisioned in the next few decades.

Marcus warned that the changes will happen and that people need to start preparing now.

“There’s the potential of machines that might fight us for resources,” he said. “It’s not guaranteed … you see the kind of ‘Terminator’ scenario, and people laugh at it because that’s science fiction. But we don’t actually have a guarantee that it won’t happen.”

Marcus added, “I think it’s really important to start thinking now to keep us from having that kind of scenario. Nobody has the perfect solution that will guarantee that machines do what we want them to do. [There's] already the problem of machines doing what we tell them to do rather than what we really want them to do.”

It’s not a new idea. Decades ago, Hollywood portrayed a computer with a mind of its own, HAL, in “2001: A Space Odyssey.” HAL warned his human operators, “I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”

More confrontations with machines appeared in films such as the 1983 movie “WarGames,” along with “Her,” “I, Robot,” “Blade Runner,” “Almost Human” and others.

While the movies are fiction, drones are being developed, according to Barrat, that are autonomous and can “commit assassinations without humans in the loop.”

Soon will come battlefield robots with similar capabilities, he said.

“Are we ready ethically to introduce those machines into the world?” he wondered.

Barrat said “absolutely” there will be machines that evolve beyond a human’s ability to control them.

Fortunately, both CBS guests agreed that humans still have an advantage when it comes to logic. A human can easily answer the question, “Can alligators go over hurdles in a steeplechase?” but a computer might not.

But, said Barrat, even the logic problem is being addressed in the development of advanced machines.

Marcus wrote in his New Yorker blog that there might be “a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine.”

“And they will be able to do it every second of every day, without sleep or coffee breaks.”

Marcus cited Barrat’s work: “A purely rational artificial intelligence, Barrat writes, might expand ‘its idea of self-preservation … to include proactive attacks on future threats,’ including, presumably, people who might be loath to surrender their resources to the machine.”

If machines “will eventually overtake us,” added Marcus, “as virtually everyone in the A.I. field believes, the real question is about values: how we instill them in machines, and how we then negotiate with those machines if and when their values are likely to differ greatly from our own.”

The blending of human and machine capabilities already is the subject of vast research projects that contemplate changes in the actual makeup of a human being. In an audio series called “Something Transhuman This Way Comes: Genetic Engineering and the Ubermenschen (Super Men) of Tomorrow,” leading researchers share their specialized knowledge about what can be expected in the future and how to prepare for it.

Marcus quotes a warning from A.I. researcher Steve Omohundro:

“If it is smart enough, a robot that is designed to play chess might also want to … build a spaceship” in order to obtain more resources for whatever goals it might have.

Marcus continues: “Barrat worries that ‘without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,’ even, perhaps, commandeering all the world’s energy in order to maximize whatever calculation it happened to be interested in.”
