Taking the Measure of AI and National Security

commentary

(The National Interest)

Binary code and head profile, photo by Kristina Bolgert/Adobe Stock


by Christopher A. Mouton and Caleb Lucas

September 20, 2023

The rise of artificial intelligence and machine learning gives new meaning to the measure-countermeasure dynamic—that is, the continuous evolution of defensive and offensive capabilities. The development of large language models, in particular, underscores the necessity of understanding and managing this dynamic.

Large language models, like GPT-4, can generate human-like text based on the input they receive. They are trained on vast quantities of data and can generate coherent and contextually relevant responses. Large language models hold great promise across a multitude of fields, including cybersecurity, healthcare, finance, and others. But as with any powerful tool, the models also pose challenges.

Bill Gates and other technology leaders have warned that AI is an existential risk and that “mitigating the risk of extinction from AI should be a global priority.” At the same time, Gates recently published a blog post titled “The risks of AI are real but manageable.” Part of managing AI will depend on understanding the measure-countermeasure dynamic, which is central to the progression and governance of AI development but is also one of its least appreciated features.

Policymakers consistently face the challenge that rapid technological advancements, and the threats they bring, outpace the creation of relevant policies and countermeasures. In the field of AI, emergent capability crises—ranging from deepfakes to potential existential risks—arise from the inherent unpredictability of technological development, influenced by geopolitical shifts and societal evolution. As a result, policy frameworks will almost always lag the state of technology.

The measure-countermeasure dynamic arises from this reality and calls for an approach we term “sequential robustness.” This approach is rooted in the paradoxical persistence of uncertainty, driven by factors such as rapid technological development and geopolitical shifts. Unlike traditional policy approaches, sequential robustness acknowledges and accepts the transient nature of current circumstances. By adopting this perspective, policymakers can immediately address problems with existing policy solutions, examine challenges that lack current solutions, and continue to study emerging threats. While pursuing an ideal solution is commendable, policymakers must prioritize actionable steps. Perfection is unattainable, but prompt and informed action is an essential first step…

The remainder of this commentary is available at nationalinterest.org.


Christopher Mouton is a senior engineer at the nonprofit, nonpartisan RAND Corporation and a professor at the Pardee RAND Graduate School. Caleb Lucas is an associate political scientist at RAND.

This commentary originally appeared on The National Interest on September 19, 2023. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.