A new risk index to monitor AI-powered biological tools
What is the issue?
Artificial intelligence (AI) holds promise and peril for biological research. It can speed up the development of new treatments and vaccines, but it may also make it easier to develop dangerous bioweapons. To manage this risk, governments need to monitor how AI-enabled biological tools evolve over time. However, current methods for assessing the risks of AI-enabled technologies are not specifically focused on the wide-ranging and rapidly evolving capabilities of biological tools, which include designing new viruses, automating wet-lab experiments and predicting protein structures. A comprehensive method is needed to capture the unique risks these tools pose. An early warning system for dangerous capabilities could support governments in deciding whether mitigations are needed at a given point in time to reduce the ceiling of harm, and could give developers the means to innovate safely.
How are we helping?
The Centre for Long-Term Resilience (CLTR) and RAND Europe are collaborating to develop a new approach to assessing the risks posed by AI-enabled biological tools. We will develop a risk index that assesses current biosecurity risks along three dimensions: a tool's capability, its evolution and maturity, and the degree to which it could enable bioweaponisation. Our index also takes into account how these risks may change over time in this rapidly evolving area. Our work will expand upon and update previous CLTR work on tool categorisation (Rose & Nelson 2023) and risk assessment (Moulange et al. 2024).
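As a purely illustrative sketch, not the project's actual rubric, one way to encode such an index is as a structured score across these three dimensions, rolled up into an overall rating. The 0-3 ordinal scale, the aggregation rule and the tool name below are all assumptions made for illustration.

from dataclasses import dataclass

# Illustrative 0-3 ordinal scale; the real rubric's scale is still to be developed.
SCALE = {0: "minimal", 1: "low", 2: "moderate", 3: "high"}

@dataclass
class ToolRiskProfile:
    name: str
    capability: int   # what the tool can currently do
    maturity: int     # how evolved and widely adopted the tool is
    enablement: int   # degree to which the tool could aid bioweaponisation

    def overall(self) -> str:
        # Assumed aggregation rule: the overall rating follows the
        # highest-scoring dimension, a deliberately conservative choice.
        return SCALE[max(self.capability, self.maturity, self.enablement)]

profile = ToolRiskProfile("hypothetical-protein-design-tool", capability=2, maturity=3, enablement=1)
print(profile.overall())  # "high"

Repeating such scoring at regular intervals would also capture the second element of the index, namely how a tool's risk profile shifts as it matures.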
We will consult expert stakeholders in government, academia and industry on our proposed tool categories, risk thresholds and expectations for tool evolution. We will then develop a rubric for risk assessment, which may be expanded to capture complex and multimodal biological tools. We will also explore how best to communicate information on evolving risks to policymakers to guide their decisions.
Using this refined methodology, we will conduct the risk assessment on the most prevalent and most nascent tools: those that are representative of, or at the cutting edge of, the functional categories we have developed through bibliometric analysis and expert consultation.
We also aim to assess the feasibility of automating the extraction and analysis of the latest literature on AI-enabled biological tools, using open-source bibliographic data and the processing capabilities of large language models. This could make the risk assessment more efficient and reproducible, allowing repeat assessments that track how risks evolve over time.
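As a rough sketch of what this automation could look like, the snippet below retrieves recent records from OpenAlex, an open bibliographic database with a public API, and rebuilds each abstract for downstream analysis. The search query is an invented example, and classify_tool_category is a hypothetical placeholder for the LLM step rather than a real client.

import requests

OPENALEX = "https://api.openalex.org/works"

def fetch_recent_works(query: str, since: str, per_page: int = 25) -> list[dict]:
    # Query the public OpenAlex API for recent works matching `query`.
    params = {
        "search": query,
        "filter": f"from_publication_date:{since}",
        "per-page": per_page,
    }
    response = requests.get(OPENALEX, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["results"]

def reconstruct_abstract(inverted_index: dict | None) -> str:
    # OpenAlex stores abstracts as an inverted index (word -> positions);
    # rebuild the plain text by sorting words back into position order.
    if not inverted_index:
        return ""
    by_position = {pos: word for word, positions in inverted_index.items() for pos in positions}
    return " ".join(by_position[i] for i in sorted(by_position))

def classify_tool_category(title: str, abstract: str) -> str:
    # Hypothetical placeholder: a real pipeline would prompt a large
    # language model here to assign the work to a functional tool category.
    raise NotImplementedError("Connect an LLM client of your choice.")

for work in fetch_recent_works("AI-enabled biological design tool", "2024-01-01"):
    abstract = reconstruct_abstract(work.get("abstract_inverted_index"))
    print(work["display_name"], "|", abstract[:100])

Because both the bibliographic queries and the prompts can be versioned, rerunning such a pipeline at set intervals would make repeat assessments reproducible, which is what allows risk trends to be tracked over time.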
Why is this important?
Governments' ability to reliably, repeatedly and comprehensively assess the capabilities and characteristics of these tools, and to evaluate their potentially dangerous applications, provides the foundation for managing risk appropriately. The intended audience for this work is US and European policymakers and industry stakeholders working at the intersection of AI and biology. The work will help this audience become well versed in the current state of play of risks from AI-enabled biological tools, and will support industry and academia in innovating responsibly while building in safeguards for risk mitigation.
This project is being conducted in collaboration with the Centre for Long-Term Resilience (CLTR).