Assessing and Suing an Algorithm

Perceptions of Algorithmic Decisionmaking

Elina Treyger, Jirka Taylor, Daniel Kim, Maynard A. Holliday

Published Oct 12, 2023

Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to be successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report presents the results of a survey experiment designed to improve understanding of how U.S. public perceptions are evolving in one high-stakes setting: decisions related to employment and unemployment.

Key Findings

  • There was evidence for an algorithmic penalty, or a tendency to judge algorithms more harshly than humans for otherwise identical decisions related to employment or unemployment.
  • Respondents were more likely to perceive algorithmic decisionmaking as unfair, error-prone, and non-transparent than to perceive human decisionmaking in those ways. By contrast, there were no consistent differences in perceptions of bias.
  • The investigation of differences in views between minority (i.e., Hispanic and/or non-White) and majority (i.e., non-Hispanic White) respondents produced mixed results. Majority respondents penalized algorithms more heavily than minority respondents did in their assessments of algorithmic fairness, accuracy, and transparency. Bias was the exception: there, the difference between minority and majority respondents ran in the opposite direction and was very small.
  • Greater exposure to algorithmic decisionmaking corresponded to greater skepticism about the future possibilities of algorithmic processes.
  • There was little evidence that people would be discouraged from seeking to hold relevant parties accountable for problematic decisions made by algorithms. To the extent that differences existed, respondents were slightly more likely to resort to legal processes when the problematic decision was made by an algorithm than when it was made by a human.

Citation

RAND Style Manual
Treyger, Elina, Jirka Taylor, Daniel Kim, and Maynard A. Holliday, Assessing and Suing an Algorithm: Perceptions of Algorithmic Decisionmaking, RAND Corporation, RR-A2100-1, 2023. As of September 11, 2024: https://www.rand.org/pubs/research_reports/RRA2100-1.html
Chicago Manual of Style
Treyger, Elina, Jirka Taylor, Daniel Kim, and Maynard A. Holliday. Assessing and Suing an Algorithm: Perceptions of Algorithmic Decisionmaking. Santa Monica, CA: RAND Corporation, 2023. https://www.rand.org/pubs/research_reports/RRA2100-1.html.

Funding for this research was provided by gifts from RAND supporters and income from operations. The research described in this report was conducted by the RAND Institute for Civil Justice within the Justice Policy Program of RAND Social and Economic Well-Being.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.