A Code of Conduct for AI in Defense Should Be an Extension of Other Military Codes

commentary

Sep 11, 2019


Photo by kontekbrothers/Getty Images

This commentary originally appeared on Friends of Europe's Debating Security Plus 2019 Programme on September 10, 2019.

An AI code of conduct for defense should look a lot like all other defense codes of conduct.

Since 1948, all members of the United Nations have been expected to uphold the Universal Declaration of Human Rights, which protects individual privacy (article 12), prohibits discrimination (articles 7 and 23), and provides other protections that could broadly be referred to as civil liberties.

Then, in 1949, the Geneva Conventions established a framework for military activities and operations. Their Additional Protocol I of 1977 requires that weapons and methods of warfare must not “cause superfluous injury or unnecessary suffering” (Article 35), and that “[i]n the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects” (Article 57).

An AI code of conduct for defense could be a natural extension of these two foundational documents. Like other military programs, AI programs should aim to reduce the number of casualties in warfare and reduce the hardships to civilian populations by seeking to minimize effects on humanitarian infrastructure (like hospitals), critical infrastructure (like bridges, dams, and power grids), natural resources, and so on.

Meanwhile, the algorithms themselves should not be trained on data that discriminates against (or in favor of) any particular race, ethnicity, gender, religious group, or other demographic. Society has already seen how algorithms can become unintentionally biased or can inherit bias from unethical training data, and we can learn from those lessons.

A global society that created the Geneva Conventions is a society that believes in a moral code for warfare, and that same moral code can extend to its weaponized algorithms.


Cortney Weinbaum is a management scientist specializing in intelligence topics at the nonprofit, nonpartisan RAND Corporation.

More About This Commentary

Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.