AI Innovators Take Pledge Against Autonomous Killer Weapons

An autonomous tank is demonstrated in France last month. Leading researchers in artificial intelligence are calling for laws against lethal autonomous weapons. They also pledge not to work on such weapons.
Christophe Morin/IP3/Getty Images

The Terminator's killer robots may seem like a thing of science fiction. But leading scientists and tech innovators have warned that such autonomous killers could materialize in the real world in frightening ways.

During the annual International Joint Conference on Artificial Intelligence in Stockholm on Wednesday, some of the world's top scientific minds came together to sign a pledge that calls for "laws against lethal autonomous weapons."

"... we the undersigned agree that the decision to take a human life should never be delegated to a machine," the pledge says. It goes on to say, "... we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons."

The moniker "autonomous weapons" doesn't draw the same fear or wonder as a killer robot, but weapons that can function without human oversight are a real concern.

In the pledge introduced by the Future of Life Institute, signees agree that AI developments can contribute to future danger. Those backing the pledge include SpaceX and Tesla founder Elon Musk and Google DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman. They are among more than 170 organizations and 2,400 individuals to take such a stance, according to the document.

"There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable," the pledge states. "There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual."

Ultimately, the pledge urges governments around the world to take legal action banning lethal autonomous weapons before they can be developed.

"Stigmatizing and preventing such an arms race should be a high priority for national and global security," the pledge states.

This isn't the first time AI leaders have expressed doubts.

Last August, corporate tech leaders wrote an open letter to the United Nations warning of an arms race in these weapons.

Such weapons, the letter said, "threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend."

Musk, who also signed that letter, has spoken out about what he sees as the inevitable dangers of AI, urging U.S. governors to be proactive. "Once there is awareness, people will be extremely afraid. As they should be," he said in July 2017.

But not everyone agrees with Musk's dire view. In an interview with NPR, Rep. Pete Olson, R-Texas, expressed his optimism for the future of AI.

"These are machines that are learning over time from activities they've done," Olson said. "They become sort of intelligent through that learning. This is the great value, great tremendous benefit for our country."

Copyright 2018 NPR. To see more, visit http://www.npr.org/.