Pentagon Funds Our Future Robot Overlords
This time, in real life, teams of enthusiastic young hackers met in Las Vegas at DEF CON, also known as "the world series of hacking." It takes its name from the numerical scale for defense readiness alerts — from DEFCON 5 (the lowest state of readiness) to DEFCON 1 (nuclear war is imminent).
But at its heart, DEF CON is really just a computerized game of Capture the Flag. The idea is to find, plug and defend security holes in your own server while seeking and exploiting the vulnerabilities of other teams’ servers.
Players get bragging rights, a shot at winning some cash and a showcase for their talents in the contest sponsored by the U.S. Department of Defense's brain trust, the Defense Advanced Research Projects Agency (DARPA).
This year's DARPA Cyber Grand Challenge, held at the Paris Hotel on the Vegas strip, reimagined that concept with code-writing, artificially intelligent software robots running on supercomputers.
AI as a science is gaining momentum because researchers understand the potential of thinking software. In theory, it could recognize context and create solutions on the fly, saving humans the time and trouble. While that’s all well and good, there is another way to think about artificial intelligence. And it’s not good.
The robot was a loyal servant in the classic '60s sci-fi TV show Lost in Space, but Stephen Hawking fears real life won't imitate art.
To promote pure efficiency, thinking machines might even have a motive to eradicate humans. As Hawking puts it: "A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble."
The organizers of the Cyber Grand Challenge see things differently. The Pentagon-funded event awarded $3.75 million in prize money and spent another $55 million on organization. That entailed recruiting seven teams of security researchers from industry and academia, then designing and building a supercomputer for each team, along with software that could monitor the teams' bots in real time. It was all very impressive and, according to organizers, well worth the money spent.
One software bot even managed to find and patch the Crackaddr bug, a decade-old vulnerability that no machine had ever cracked before.
A bot programmed by engineers from Raytheon (RTN) played an especially aggressive game. After patching holes on its own server, it pounced on exposed vulnerabilities on multiple fronts, scoring points. Its own shortcomings, however, ultimately sapped its processing power.
The commentators reasoned that the bot chose this strategy because it determined there was no further benefit to be gained from patching and exploiting new holes. The most prudent course was simply to wait as other teams exposed vulnerabilities and could no longer respond effectively.
While it is impressive that a piece of software can reason at all, it’s also disconcerting. I don’t want to seem like an alarmist or pessimist given the very bright investment outlook for new technologies. Yet I hope the next wave of artificially intelligent software doesn’t simply wait for humans to reach the point where we can no longer respond to exposed vulnerabilities. That would not lead to a Hollywood ending.
Best wishes,
Jon Markman