Generative Artificial Intelligence Threats to Information Integrity and Potential Policy Responses
Expert InsightsPublished Apr 16, 2024
This paper highlights the ecosystem of generative artificial intelligence (AI) threats to information integrity and democracy and the potential policy responses to mitigate those evolving, interconnected threats. The authors focus on the information environment and how generative AI—such as large language models or AI-generated images and audio—can accelerate existing harms on the internet and beyond. The policy options that could address these complex problems are vast, ranging from much-needed social media reforms to the use of federal agencies to create sweeping standards for AI-generated content. The authors provide an overview of the risks that generative AI presents to democratic systems, as well as tangible, detailed whole-of-government and societal solutions to mitigate these risks at scale.
Funding for this work was provided by gifts from RAND supporters and income from operations. The research was conducted within the Technology and Security Policy Center of RAND Global and Emerging Risks.
This publication is part of the RAND expert insights series. The expert insights series presents perspectives on timely policy issues.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.