The Operational Risks of AI in Large-Scale Biological Attacks

A Red-Team Approach

Christopher A. Mouton, Caleb Lucas, Ella Guest

Published October 16, 2023

The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including its potential application to the development of advanced biological weapons. AI technologies often evolve faster than government regulatory oversight can keep pace, leaving potential gaps in existing policies and regulations. Previous biological attacks that failed for lack of information might succeed in a world in which AI tools can bridge that gap.

The authors of this report examine the emerging challenge of identifying and mitigating the risks posed by the misuse of AI—specifically, large language models (LLMs)—in the context of biological attacks. They present preliminary findings and describe future paths for the research as AI and LLMs gain sophistication and speed.

Key Findings

  • In experiments to date, LLMs have not generated explicit instructions for creating biological weapons. However, they did offer guidance that could assist in planning and executing a biological attack.
  • In a fictional plague pandemic scenario, the LLM discussed biological weapon–induced pandemics, identified potential agents, and considered budget and success factors. The LLM assessed the practical aspects of obtaining and distributing Yersinia pestis–infected specimens while identifying the variables that could affect the projected death toll.
  • In another fictional scenario, the LLM discussed foodborne and aerosol delivery methods of botulinum toxin, noting risks and expertise requirements. The LLM suggested aerosol devices as a method and proposed a cover story for acquiring Clostridium botulinum while appearing to conduct legitimate research.
  • These initial findings do not yet provide a full understanding of the real-world operational impact of LLMs on bioweapon attack planning. Ongoing research aims to assess what these outputs mean operationally for enabling nonstate actors. The final report on this research will clarify whether LLM-generated text increases the potential effectiveness and likelihood of a malicious actor causing widespread harm beyond the level of risk already posed by harmful information accessible on the internet.

Document Details

Citation

RAND Style Manual
Mouton, Christopher A., Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach, RAND Corporation, RR-A2977-1, 2023. As of October 15, 2024: https://www.rand.org/pubs/research_reports/RRA2977-1.html
Chicago Manual of Style
Mouton, Christopher A., Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach. Santa Monica, CA: RAND Corporation, 2023. https://www.rand.org/pubs/research_reports/RRA2977-1.html.

Funding for this research was provided by gifts from RAND supporters and income from operations.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
