In this report, the authors address the emerging issue of identifying and mitigating the risks posed by the misuse of artificial intelligence (AI)—specifically, large language models (LLMs)—in the context of biological attacks and present preliminary findings of their research. They find that although LLMs can generate concerning text, the operational impact of such outputs remains a subject for future research.
- How can AI—and, more specifically, LLMs—be misused in the context of biological attacks?
- What are future avenues for research on AI and LLM misuse in this context?
The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including its potential application to the development of advanced biological weapons. Because AI technologies often evolve faster than government regulatory oversight can adapt, gaps can emerge in existing policies and regulations. Previous biological attacks that failed because of a lack of information might succeed in a world in which AI tools can readily bridge that gap.
The authors of this report look at the emerging issue of identifying and mitigating the risks posed by the misuse of AI—specifically, large language models (LLMs)—in the context of biological attacks. They present preliminary findings of their research and examine future paths for that research as AI and LLMs gain sophistication and speed.
- In experiments to date, LLMs have not generated explicit instructions for creating biological weapons. However, LLMs did offer guidance that could assist in the planning and execution of a biological attack.
- In a fictional plague pandemic scenario, the LLM discussed biological weapon–induced pandemics, identified potential agents, and considered budget and success factors. The LLM assessed the practical aspects of obtaining and distributing Yersinia pestis–infected specimens while identifying the variables that could affect the projected death toll.
- In another fictional scenario, the LLM discussed foodborne and aerosol delivery methods for botulinum toxin, noting associated risks and expertise requirements. The LLM suggested aerosol devices as a delivery method and proposed a cover story for acquiring Clostridium botulinum while appearing to conduct legitimate research.
- These initial findings do not yet provide a full understanding of the real-world operational impact of LLMs on bioweapon attack planning. Ongoing research aims to assess what these outputs mean operationally for enabling nonstate actors. The final report on this research will clarify whether LLM-generated text increases the effectiveness and likelihood of a malicious actor causing widespread harm or merely mirrors the level of risk already posed by harmful information accessible on the internet.