Ethical AI For War? Defense Innovation Board Says It Can Be Done

UPDATED from DIB press conference. WASHINGTON: A Pentagon-appointed panel of tech experts says the Defense Department can and must ensure that humans retain control of artificial intelligence used for military purposes.

“Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems,” the Defense Innovation Advisory Board stated as its first principle of ethical military AI. Four other principles state that AI must be reliable, controllable, and unbiased, and must make decisions in a way that humans can actually understand. In other words, AI can’t be a “black box” of impenetrable math that makes bizarre decisions, like the Google image-recognition software that persistently classified black people as gorillas rather than human beings.

The board didn’t delve into the much-debated details of when, if ever, it would be permissible for an algorithm to make the decision to take a human life. “Our focus is as much on non-combat as on combat systems,” said board member Michael McQuade, VP for research at Carnegie Mellon University, at a press conference on the report.

In most cases, current Pentagon policy effectively requires a human to pull the trigger, even if the robot identifies the target and aims the gun. But the military is also intensely interested in non-combat applications of AI, from maintenance diagnostics to personnel management to intelligence analysis, and these systems, too, need to be handled responsibly, McQuade said.

So we’d boil the report’s fundamental principle down to this: When an artificial intelligence accidentally or deliberately causes harm — in the worst case, if it kills civilians, prisoners, or friendly troops — you don’t get to blame the AI and walk away. The humans who built the machine and turned it loose are morally and legally responsible for its actions, so they’d damn well better be sure they understand how it works and can control it.

“Just because AI is new as a technology, just because it has reasoning capability, it does not remove the responsibility from people,” McQuade said. “What is new about AI does not change human responsibility…. You definitely can’t say ‘the machine screwed up, oh well.’”

Ethical, Controllable, Impossible?

Now, the proposition that humans can ethically employ AI in war is itself a contentious one. Arms control and human rights activists like the Campaign to Stop Killer Robots are deeply skeptical of any military application. Celebrity thinkers like Stephen Hawking and Elon Musk have warned that even civilian AI could escape human control in dangerous ways. AI visionaries often speak of a coming “singularity” when AI evolves beyond human comprehension.

By contrast, the Defense Innovation Board – chaired by former Google chairman Eric Schmidt – argues that the US military has a long history of using technology in ethical ways, even in the midst of war, and that this tradition is still applicable to AI.

“There is an enormous history and capacity and culture in the department about doing complex, dangerous things,” McQuade told reporters. The goal is to build on that, adding only what’s specifically necessary for AI, rather than reinvent the entire ethical wheel for the US military, he said.

“The department does have ethical principles already,” agreed fellow board member Milo Medlin, VP for wireless at Google. What’s more, he said, it has a sophisticated engineering process and tactical after-action reviews that try to prevent weapons from going wrong. “The US military has been so good at reducing collateral damage, has been good about safety, because of this entire [process],” he said. “The US military is very, very concerned about making sure its systems do what they are supposed to do and that will not change with AI-based systems.”

“Our aim is to ground the principles offered here in DoD’s longstanding ethics framework – one that has withstood the advent and deployment of emerging military-specific or dual-use technologies over decades and reflects our democratic norms and values,” the board writes. “The uncertainty around unintended consequences is not unique to AI; it is and has always been relevant to all technical engineering fields. [For example,] US nuclear-powered warships have safely sailed for more than five decades without a single reactor accident or release of radioactivity that damaged human health or marine life.” (Of course, no one has put those nuclear warships to the ultimate safety test of sinking them).

“In our three years of researching issues in technology and defense,” the board continues, “we have found the Department of Defense to be a deeply ethical organization, not because of any single document it may publish, but because of the women and men who make an ongoing commitment to live and work – and sometimes to fight and die – by deeply held beliefs.”

Five Principles, 12 Recommendations

While the advisory board’s recommendations are not binding on the Defense Department, the Pentagon did ask for them. The board spent 15 months consulting over 100 experts – from retired four-star generals to AI entrepreneurs to human rights lawyers – and reviewing almost 200 pages of public comments. It held public hearings, roundtable discussions, and even conducted a wargame drawing on classified information before it came out with its five principles and 12 recommendations.

The principles are worth spelling out – with some annotations to explain them. Defense Department use of AI, the board says, should be responsible, equitable, traceable, reliable, and governable: shorthand for the human accountability, freedom from unintended bias, explainable decision-making, dependability, and human control summarized above. The last of these, governability, is the principle whose wording shifted between the board’s draft and its final report.

The original wording of the governability principle read as follows: “DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.” This language isn’t just calling for “killer robots” to have an off switch. It’s suggesting military AI should have some ability to self-diagnose, detect when it’s going wrong, and deactivate whatever is causing the problem. That requires a level of awareness of both self and the surrounding environment that’s beyond existing AI – and predicting potential unintended consequences is difficult by definition.

The revised wording of the final report changed “disengage or deactivate” to the more specific “human or automated disengagement or deactivation.” In other words, the decision to hit the off switch could be made either by a human or by a machine.

“The DoD should have the ability to turn things off, to detect [problems] and turn them off through an automated system,” Medlin said. That safety feature doesn’t have to be an artificially intelligent autonomous system itself, he said: you could use conventional software with predictable IF-THEN heuristics to monitor the more intelligent but less predictable AI, or even, in some cases, a hardware cut-out that makes certain actions physically impossible. (One historical example is how old-fashioned fighter planes had interrupter gear that kept their machine guns from firing when the propeller blade was in the way.)
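To make that idea concrete, here is a minimal sketch, under purely hypothetical assumptions, of what such a conventional rule-based monitor might look like in code. Nothing in it comes from the board’s report: the Recommendation and Watchdog classes, the list of allowed actions, and the thresholds are all invented for illustration.

```python
# A minimal, hypothetical sketch (not from the DIB report): a plain rule-based
# monitor wrapped around a less predictable AI component. Every name below
# (Recommendation, Watchdog, the allowed actions, the thresholds) is invented
# for illustration.

from dataclasses import dataclass


@dataclass
class Recommendation:
    """Output of the hypothetical AI component being monitored."""
    action: str        # e.g. "flag_for_maintenance", "reroute_convoy"
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0


class Watchdog:
    """Conventional IF-THEN monitor: no learning, fully predictable behavior."""

    ALLOWED_ACTIONS = {"flag_for_maintenance", "reroute_convoy", "request_human_review"}
    MIN_CONFIDENCE = 0.85
    MAX_UNREVIEWED_ACTIONS = 10  # cap on automated actions before a human must step in

    def __init__(self) -> None:
        self.actions_since_review = 0
        self.engaged = True

    def check(self, rec: Recommendation) -> bool:
        """Return True if the recommendation may proceed; otherwise block or disengage."""
        if not self.engaged:
            return False
        if rec.action not in self.ALLOWED_ACTIONS:
            self.disengage(f"unexpected action: {rec.action}")
            return False
        if rec.confidence < self.MIN_CONFIDENCE:
            return False  # drop low-confidence output but keep the system running
        self.actions_since_review += 1
        if self.actions_since_review > self.MAX_UNREVIEWED_ACTIONS:
            self.disengage("too many automated actions without human review")
            return False
        return True

    def disengage(self, reason: str) -> None:
        """Hit the off switch; only a human review re-enables the system."""
        self.engaged = False
        print(f"WATCHDOG: system disengaged ({reason}); awaiting human review")


# Example: an out-of-bounds action trips the off switch immediately.
watchdog = Watchdog()
allowed = watchdog.check(Recommendation(action="launch_strike", confidence=0.99))
# allowed is False, and the watchdog has disengaged, because "launch_strike"
# is not on the hard-coded allowed list.
```

The point of the design is that the monitor itself contains no machine learning. Every branch is an explicit IF-THEN rule, so its behavior stays predictable and auditable even when the AI it supervises is not, which is the property Medlin describes; a hardware cut-out pushes the same logic down to the physical level.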

The original wording also allowed for a human to “disengage or deactivate” an errant system, Medlin continued, but fellow board member Danny Hillis “felt very strongly that the word human should be in there.”

The language itself is neutral on whether you should have a human, an automated non-AI system, or an AI controlling the off switch, McQuade said: “The principle that we’re espousing is that … you need to be able [to] have a method of detecting when a system is doing something that it’s not intended to do.”

This article was written by Sydney J. Freedberg Jr. from Breaking Defense and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to [email protected].