“Send us your ideas!” That was the open call for submissions about emerging technology's role in global order put out last summer by the National Security Commission on Artificial Intelligence (NSCAI). RAND researchers stepped up to the challenge, submitting a wide range of ideas. Ten essays were ultimately accepted for publication.
The NSCAI, co-chaired by Eric Schmidt, the former chief executive of Alphabet (Google's parent company), and Robert Work, the former deputy secretary of defense, is a congressionally mandated, independent federal commission set up last year “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.”
The commission's ultimate role is to elevate awareness and to inform better legislation. As part of its mission, the commission is tasked with helping the Department of Defense better understand and prepare for a world where AI might impact national security in unexpected ways.
Following this mandate, its commissioning of outside-the-box input came with an unusually broad and public request: “We need to hear original, creative ideas that challenge the status quo, shake our assumptions, and will cause us to reconsider the arguments we've already heard and hear new arguments in a different light.”
Never fearful of the bold and the new, RAND quickly hosted a competition within its ranks for the best ideas. It worked. RAND researchers put forward a number of insightful essays for consideration; nine were accepted for publication by NSCAI's media partner on this effort, War on the Rocks (WOTR), and a tenth was accepted by The Strategy Bridge. Both are highly regarded national security policy publications. The essays range in scope and subject matter, from military deception, to open-source research (research that is freely and publicly available, even when it bears on national security), to how to train AI soldier robots, to the role of chess in AI, among other detailed proposals.
The NSCAI sought new, challenging ideas on any of five topics that it called “prompts”: articulating a coherent vision of the future of war and competition; identifying the capabilities needed to better develop AI; assessing how institutions, organizational structures, and infrastructure will affect the development and adoption of AI; establishing norms about emerging technology; and securing the public's support and the private sector's involvement.
Each prompt includes subquestions that pose specific queries and suggest ways to address them, such as: “What might happen if the United States fails to develop robust AI capabilities that address national security issues?”
Here's a rundown of the RAND essays that answer some of the questions posed:
Jasmin Leveille writes about embracing open-source military research to win the AI competition. “Unless the U.S. government significantly leads or trails the AI community—which is unlikely—it faces only minor risks in releasing its algorithms,” he argues. “The alternative to an open research strategy risks leaving the United States trailing by a wide margin.”
Danielle Tarraf writes that our future lies in making AI robust and verifiable. She argues for more AI verification to ensure trust: “Algorithms are fragile, and the very science of verification to certify that they perform as desired is still inadequate, especially where black box AI algorithms are concerned. We should not trust what is not robust, and we cannot trust what we cannot verify.” To that end, she says, science needs to catch up to engineering.
Edward Geist and Marjory Blumenthal write on military deception: AI's killer app. They say that technologies of misdirection are winning: “Rather than lifting the ‘fog of war,' AI and machine learning may enable the creation of ‘fog of war machines'—automated deception planners designed to exacerbate knowledge quality problems.”
Daniel Egel, Eric Robinson, Charles Cleveland, and Christopher Oates write on AI and irregular warfare: an evolution, not revolution. They ask whether AI will change the ways wars are fought. “The United States should proactively shape AI's impact on the next generation of irregular warfare to our advantage through a few key steps,” they say. These steps include better capturing innovation in the commercial ecosystem; recruiting and retaining personnel capable of leveraging AI capabilities; and coordinating with international allies.
Rand Waltzman and Thomas Szayna discuss managing security threats to machine learning. “The rush to implement and field insecure systems containing advanced machine learning components introduces dangerous vulnerabilities that will be exploited by nefarious actors in ways we have barely begun to understand,” they write. In short, they argue, “It is high time that the issue of vulnerabilities in machine learning technologies is treated as a critical national-level concern.”
Patrick Roberts writes on AI for peace. He says, “The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing AI military technology.” Going further, he argues, “An AI for Peace program should articulate the dangers of this new technology, principles (e.g. no kill, human control, off switch) to manage the dangers, and a structure to shape the incentives for other states (perhaps a system of monitoring and inspection).”
Andrew Lohn writes about what chess can teach us about the future of AI and war. “[Chess] has been teaching military strategists the ways of war for hundreds of years and has been a testbed for AI development for decades,” he notes. In combat, “AI-enabled computers might be an equalizer to help underdogs find new playable options.”
James Ryseff writes about how to recruit talent for the AI challenge. He says, “The Defense Department (DOD) directly competes with American technology companies for a limited pool of cyber and AI talent—a competition it all too often loses.” Ultimately, “the Defense Department's success in deploying the most innovative AI technology will depend on its ability to embrace a culture of creativity, innovation, and self-improvement.”
Thomas Hamilton writes about how to train your AI soldier robots. “As the capabilities of AI-enabled robots increase, how will we organize, train, and command them—and the humans who will supervise and maintain them?” he asks. It may go beyond human emulation. “Robots with new, sophisticated patterns of behavior may require new forms of organization,” Hamilton writes. Ultimately, “The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.”
Christopher Paul and Marek Posard write about artificial intelligence and the manufacturing of reality. They say there are “flaws humans carry with them in deciding what is or is not real. The internet and other technologies have made it easier to weaponize and exploit these flaws, beguiling more people faster and more compellingly than ever before.”
In the next phase of the NSCAI's ideas framework, a few researchers from RAND and elsewhere whose essays were selected for publication in WOTR will testify before the commission, which, in turn, reports to Congress, the executive branch, and “the American people,” according to Schmidt and Work's original call for entries. That means RAND researchers could be sharing keen, challenging insights from their advanced work on AI not only via provocative essays; they may be lending their voices, too. Stay tuned.
— Thomas Kostigen