U.S. Tort Liability for Large-Scale Artificial Intelligence Damages
A Primer for Developers and Policymakers
Research | Published Aug 21, 2024
Leading artificial intelligence (AI) developers and researchers, as well as government officials and policymakers, are investigating the harms that advanced AI systems might cause. In this report, the authors describe the basic features of U.S. tort law and analyze their significance for the liability of AI developers whose models inflict, or are used to inflict, large-scale harm.
Highly capable AI systems are a growing presence in widely used consumer products, industrial and military enterprises, and critical societal infrastructure. Such systems may soon become a significant presence in tort cases as well, especially if their ability to engage in autonomous or semi-autonomous behavior, or their potential for harmful misuse, grows over the coming years.
The authors find that AI developers face considerable liability exposure under U.S. tort law for harms caused by their models, particularly if those models are developed or released without rigorous safety procedures and industry-leading safety practices.
At the same time, however, developers can mitigate their exposure by taking rigorous precautions and exercising heightened care in developing, storing, and releasing advanced AI systems. By taking due care, developers can reduce both the risk that their activities will harm other people and the risk that they will be held liable if such harm does occur.
The report is intended to be useful to AI developers, policymakers, and other nonlegal audiences who wish to understand the liability exposure that AI development may entail and how this exposure might be mitigated.
Funding for this research was provided by gifts from RAND supporters and income from operations. The research was conducted by RAND Global and Emerging Risks.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.