As frontier artificial intelligence (AI) models become more capable, protecting them from malicious actors will become increasingly important. If AI systems rapidly gain capability over the next few years, achieving sufficient security will require investments, starting today, that go well beyond the apparent default trajectory. This working paper suggests steps that can be taken now to avoid future problems.
Table of Contents
Overview of the Interim and Full Reports
Context and Motivation
Attack Vectors — Highlights
Risk Estimates for Attack Vectors — Highlights
Security Levels — Highlights
Funding for this research was provided by gifts from RAND supporters. The research was conducted by the Acquisition and Technology Policy Program within the RAND National Security Research Division.
This report is part of the RAND Corporation Working paper series. RAND working papers are intended to share researchers' latest findings and to solicit informal peer review. They have been approved for circulation by RAND but may not have been formally edited or peer reviewed.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.