The hype surrounding artificial intelligence (AI) tends toward sweeping declarations: It will revolutionize health care, transform transportation, automate mundane tasks and quash the world’s fake news problem.

But all this world-saving potential may come with a dark side that goes well beyond eliminating jobs. AI as it exists today (not some far-off “superintelligence” akin to general human intelligence, but the kind of technology companies like Google are already deploying, or could plausibly deploy within the next five years) could be exploited by rogue states, terrorists and criminals. That is according to The Malicious Use of Artificial Intelligence, a report published earlier this year by AI experts at institutions including the University of Cambridge and the research firm OpenAI.

These AI-backed attacks will be both digital and physical. Physical attacks could come in many forms, the report’s authors write: weaponized drones, autonomous weapons systems, and autonomous vehicles that have been hijacked and forced to crash. These are new kinds of threats.

Digital attacks, on the other hand, will be more familiar, but AI will make them easier to mount and more devastating. According to the report, “The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing trade-off between the scale and efficacy of attacks. This could expand the threat associated with labor-intensive cyberattacks (such as spear phishing).”

This scenario is not years down the road; it is arriving now. Sixty-two percent of security experts believe that AI will be weaponized and used for cyberattacks this year, according to a survey by U.S. software firm Cylance. Similarly, a group of AI scientists and experts wrote open letters to political leaders in Canada and Australia in November 2017 urging them to ban weaponized robots capable of autonomously deciding whether people live or die.

“Lethal, autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line,” an open letter to Canadian Prime Minister Justin Trudeau stated.

The group calls for an international agreement on banning such systems, Newsweek reported. Its letter ends ominously: “If developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. The deadly consequence of this is that machines—not people—will determine who lives and dies.”
