Assessing and Suing an Algorithm

Perceptions of Algorithmic Decisionmaking

Published Oct 12, 2023

by Elina Treyger, Jirka Taylor, Daniel Kim, Maynard A. Holliday


Research Questions

  1. How do public perceptions of algorithmic decisionmaking in such domains as employment and unemployment compare with perceptions of traditional human decisionmaking?
  2. What kinds of judgments about the shortcomings of alternative decisionmaking processes underlie these perceptions?
  3. Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?
  4. How will encounters with negative outcomes produced by algorithmic decisionmaking shape people's assessments of these technologies?

Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers' understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment.

Key Findings

  • There was evidence for an algorithmic penalty, or a tendency to judge algorithms more harshly than humans for otherwise identical decisions related to employment or unemployment.
  • Respondents were more likely to perceive algorithmic decisionmaking as unfair, error-prone, and non-transparent than they were to perceive human decisionmaking as such. By contrast, there were no consistent differences in perceptions of bias.
  • The investigation of differences in views between minority (i.e., Hispanic and/or non-White) and majority (i.e., non-Hispanic White) respondents produced results that were not straightforward. Majority respondents penalized algorithms more heavily than minority respondents did in their assessments of algorithmic fairness, accuracy, and transparency. This was not the case for bias, where the differences between minority and majority respondents were reversed and very small.
  • Greater exposure to algorithmic decisionmaking corresponded to greater skepticism about the future possibilities of algorithmic processes.
  • There was little evidence that people would be discouraged from seeking to hold relevant parties accountable for problematic decisions made by algorithms. To the extent that differences existed, respondents were slightly more likely to resort to legal processes when the problematic decision was made by an algorithm than when it was made by a human.

Funding for this research was provided by gifts from RAND supporters and income from operations. The research described in this report was conducted by the RAND Institute for Civil Justice within the Justice Policy Program of RAND Social and Economic Well-Being.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.