U.S. Tort Liability for Large-Scale Artificial Intelligence Damages

A Primer for Developers and Policymakers

Ketan Ramakrishnan, Gregory Smith, Conor Downey

Research | Published Aug 21, 2024

Leading artificial intelligence (AI) developers and researchers, as well as government officials and policymakers, are investigating the harms that advanced AI systems might cause. In this report, the authors describe the basic features of U.S. tort law and analyze their significance for the liability of AI developers whose models inflict, or are used to inflict, large-scale harm.

Highly capable AI systems are a growing presence in widely used consumer products, industrial and military enterprises, and critical societal infrastructure. Such systems may soon become a significant presence in tort cases as well, especially if their capacity for autonomous or semi-autonomous behavior, or their potential for harmful misuse, grows over the coming years.

The authors find that AI developers face considerable liability exposure under U.S. tort law for harms caused by their models, particularly if those models are developed or released without rigorous safety procedures and industry-leading safety practices.

At the same time, developers can mitigate their exposure by taking rigorous precautions and exercising heightened care in developing, storing, and releasing advanced AI systems. By taking due care, developers can reduce both the risk that their activities will harm other people and the risk that they will be held liable if such harm does occur.

The report is intended to be useful to AI developers, policymakers, and other nonlegal audiences who wish to understand the liability exposure that AI development may entail and how this exposure might be mitigated.

Key Findings

  • Tort law is a significant source of legal risk for developers that do not take adequate precautions to guard against causing harm when developing, storing, testing, or deploying advanced AI systems.
  • There is substantial uncertainty, in important respects, about how existing tort doctrine will be applied to AI development. Jurisdictional variation and uncertainty about how legal standards will be interpreted and applied may generate substantial liability risk and costly legal battles for AI developers.
  • AI developers that do not employ industry-leading safety practices, such as rigorous red-teaming, safety testing, and the installation of robust safeguards against misuse, may substantially increase their liability exposure.
  • While developers face significant liability exposure from the risk that third parties will misuse their models, there is considerable uncertainty about how this issue will be treated in the courts, and different states may take markedly different approaches.
  • Safety-focused policymakers, developers, and advocates can strengthen AI developers' incentives to employ cutting-edge safety techniques by developing, implementing, and publicizing new safety standards and procedures and by formally promulgating them through industry bodies.
  • Policymakers may wish to clarify or modify liability standards for AI developers and/or develop complementary regulatory standards for AI development.


Document Details

  • Availability: Available
  • Year: 2024
  • Print Format: Paperback
  • Paperback Pages: 65
  • Paperback Price: $27.00
  • Paperback ISBN/EAN: 1-9774-1339-0
  • DOI: https://doi.org/10.7249/RRA3084-1
  • Document Number: RR-A3084-1

Citation

RAND Style Manual
Ramakrishnan, Ketan, Gregory Smith, and Conor Downey, U.S. Tort Liability for Large-Scale Artificial Intelligence Damages: A Primer for Developers and Policymakers, RAND Corporation, RR-A3084-1, 2024. As of September 4, 2024: https://www.rand.org/pubs/research_reports/RRA3084-1.html
Chicago Manual of Style
Ramakrishnan, Ketan, Gregory Smith, and Conor Downey, U.S. Tort Liability for Large-Scale Artificial Intelligence Damages: A Primer for Developers and Policymakers. Santa Monica, CA: RAND Corporation, 2024. https://www.rand.org/pubs/research_reports/RRA3084-1.html. Also available in print form.

Research conducted by

Funding for this research was provided by gifts from RAND supporters and income from operations. The research was conducted by RAND Global and Emerging Risks.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.