As the use of artificial intelligence (AI) algorithms in decisionmaking spreads, public perceptions will have many implications — including for jury judgments about algorithmic liability and support for AI regulation. This report describes a survey experiment that explored such perceptions in the context of employment.
- How do public perceptions of algorithmic decisionmaking in such domains as employment and unemployment compare with perceptions of traditional human decisionmaking?
- What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions?
- Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?
- How will encounters with negative outcomes produced by algorithmic decisionmaking shape people's assessments of these technologies?
Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?
Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers' understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment.
- There was evidence for an algorithmic penalty, or a tendency to judge algorithms more harshly than humans for otherwise identical decisions related to employment or unemployment.
- Respondents were more likely to perceive algorithmic decisionmaking as unfair, error-prone, and non-transparent than they were to perceive human decisionmaking as such. By contrast, there were no consistent differences in perceptions of bias.
- The investigation of differences in views between minority (i.e., Hispanic and/or non-White) and majority (i.e., non-Hispanic White) respondents produced mixed results. Majority respondents penalized algorithms more heavily than minority respondents did in their assessments of algorithmic fairness, accuracy, and transparency. This was not the case for bias, for which the differences between minority and majority respondents were reversed and very small.
- Greater exposure to algorithmic decisionmaking corresponded to greater skepticism about the future possibilities of algorithmic processes.
- There was little evidence that people would be discouraged from seeking to hold relevant parties accountable for problematic decisions made by algorithms. To the extent that differences existed, respondents were slightly more likely to resort to legal processes when the problematic decision was made by an algorithm than when it was made by a human.
Table of Contents
Algorithmic Decisionmaking and Public Perceptions
Survey Experiment Results
Conclusions and Implications
Methods and Supplemental Tables