Current Artificial Intelligence Does Not Meaningfully Increase Risk of a Biological Weapons Attack
January 25, 2024
The current generation of large language models (LLMs), a form of artificial intelligence (AI), does not increase the risk of a biological weapons attack by a non-state actor, according to new RAND research assessing the potential risks posed by the misuse of AI.
While LLMs can generate troubling text associated with biological weapons, the report finds that their use did not measurably change the operational risk of a biological attack, because their outputs generally mirrored information readily available on the internet.
The report notes that risks may exist, but during a red-team exercise using experts to emulate malicious non-state actors, researchers did not find any statistically significant differences in the viability of biological weapons attack plans generated with or without LLM assistance. This suggests that the tasks involved in planning such an attack fall outside the capabilities of the LLMs examined.
“Just because today's LLMs aren't able to close the knowledge gap needed to facilitate biological weapons attack planning doesn't preclude the possibility that they may be able to in the future,” said Christopher Mouton, lead author and a senior engineer at RAND. “This is worth continuing to study because AI technology is available to everyone—including dangerous non-state actors—and it's advancing faster than governments can keep pace.”
Because LLMs are increasingly capable and available, it's important to monitor their evolution to ensure they are safe and secure from potential misuse, according to the report. Accurate risk assessment models, such as the methodology developed for this research, can be used to help evaluate these technologies and inform the discussion of effective regulatory frameworks.
The report, “The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study,” is a follow-up to an earlier methodology report published in October 2023. Funding for both reports was provided by gifts from RAND supporters and income from operations; the research was conducted by the new RAND Center for Global and Emerging Risks. Other authors are Caleb Lucas and Ella Guest.