Governments around the world should stop research on lethal autonomous machines before it’s too late, according to a United Nations human rights expert. A group of researchers, including experts on drone technology who work for the Pentagon, warns that killer robot technology could arrive before nations have time to think through the implications.
Christof Heyns, a UN special rapporteur, urged world leaders to press pause on research, saying, “Time is of the essence.” Speaking to reporters in Geneva, he said that programming machines to kill without human decision-making could encourage more wars and make it more difficult to hold anyone accountable for war crimes. The Pentagon is already developing autonomous, non-lethal systems, envisioned for uses such as crowd control and delivering humanitarian aid to danger zones.
Human Rights Watch wants an all-out ban on killer robots. But a different camp says the Pentagon ought to continue research and development because the U.S. can’t anticipate what it will need in future conflicts.
How quickly is the technology moving? Can a machine with artificial intelligence make the same battlefield judgments as a human soldier? Why can’t a human be kept in the loop, as with current drone technology?
Bonnie Docherty, Senior Researcher, Arms Division, Human Rights Watch; Lead author of "Losing Humanity: The Case against Killer Robots," a report on fully autonomous weapons jointly published by Human Rights Watch and Harvard Law School's International Human Rights Clinic
Christopher Harmer, Senior Naval Analyst, Institute for the Study of War; Previously, he served for 20 years as a career officer in the U.S. Navy