Industry and Government Collaboration on Security Guardrails for AI Systems
Summary of the AI Safety and Security Workshops
The rapid evolution of artificial intelligence (AI) technology offers immense opportunity to advance human welfare. However, this evolution also poses novel threats to humanity. Foundation models (FMs) are AI models trained on large datasets that exhibit generalized competence across a wide variety of domains and tasks, such as answering questions, generating images or essays, and writing code. This generalized competence is the root of FMs' great potential, both positive and negative. With the appropriate training data, FMs could be quickly deployed to enable the creation and use of chemical and biological weapons, exacerbate the synthetic drug crisis, amplify disinformation campaigns that undermine democratic elections, and disrupt the financial system through stock market manipulation.
Reflecting these concerns, the RAND Corporation and the Carnegie Endowment for International Peace hosted a series of workshops in July 2023 with government and AI industry leaders to discuss developing security guardrails for FMs. Participants identified concerns about AI's impact on national security, potential policies to mitigate such risks, and key questions to inform future research and analysis.
Table of Contents
Chapter One
Introduction
Chapter Two
Workshop Insights
Chapter Three
Short-Term Policy Actions
Chapter Four
Conclusion
Research conducted by
The research reported here was prepared for the Office of the Secretary of Defense and conducted within the International Security and Defense Policy Program of the RAND National Security Research Division (NSRD).
This report is part of the RAND Corporation conference proceeding series. RAND conference proceedings present a collection of papers delivered at a conference or a summary of the conference.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.