The rapid evolution of artificial intelligence (AI) technology offers immense opportunity to advance human welfare. However, this evolution also poses novel threats to humanity. Foundation models (FMs) are AI systems trained on large datasets that show generalized competence across a wide variety of domains and tasks, such as answering questions, generating images or essays, and writing code. This generalized competence is the root of FMs' great potential, both positive and negative. If misused, FMs could be deployed to enable the creation and use of chemical and biological weapons, exacerbate the synthetic drug crisis, amplify disinformation campaigns that undermine democratic elections, and disrupt the financial system through stock market manipulation.
Reflecting these concerns, the RAND Corporation and the Carnegie Endowment for International Peace hosted a series of workshops in July 2023 with government and AI industry leaders to discuss the development of security guardrails for FMs. Participants identified concerns about AI's impact on national security, potential policies to mitigate those risks, and key questions to inform future research and analysis.
Table of Contents
Short-Term Policy Actions