Download eBook for Free

Format: PDF file
File size: 1.2 MB

Use Adobe Acrobat Reader version 10 or higher for the best experience.


Purchase Print Copy

Format: Paperback, 100 pages
Price: $23.00

Research Questions

  1. What do algorithmic equity challenges look like in different applications?
  2. What are the criteria for assessing algorithmic equity challenges?
  3. What mechanisms exist for correcting sustained algorithmic equity challenges, and how do existing legal and social norms apply?
  4. Are new mechanisms or other responses necessary?

Social institutions — such as markets, social media platforms, criminal justice systems, and employment processes — increasingly leverage algorithms for decisionmaking purposes. This report examines potential pathologies in institutional use of algorithmic decisionmaking tools. The primary focus of this report is to understand how to evaluate the equitable use of algorithms across a range of specific applications. The report outlines concepts of equity from philosophical, legal, and technical traditions to draw insights that apply across algorithmic decisionmaking contexts. The researchers develop a framework for examining algorithmic decisionmaking and work through three domain explorations (auto insurance, job recruitment, and criminal justice). In addition, the work contains a deep dive into an algorithm audit of a part of the North Carolina criminal justice system. The work ends with overall insights and recommendations of practical mechanisms for algorithmic governance. The subject of the report is important because unaddressed equity challenges can undermine the stability and legitimacy of social institutions and lead to severe adverse impacts on affected people.

Key Findings

Fairness, or equity, is a complex and highly domain-specific concept

  • Equity is a contested concept, and it will require domain-specific examination.
  • Prohibiting the use of sensitive attributes is increasingly ineffective for ensuring equity, given advanced algorithms and large secondary data sets.
  • The ground-truth accuracy criterion may be an inadequate standard for judging algorithmic decisions.
  • Algorithmic transparency is important, but it is not a panacea.
  • Implementation practices matter. Implementation and institutional factors can improve or degrade fairness, regardless of the fairness of the algorithms themselves.
  • There are useful technical and regulatory interventions that can address or reduce equity challenges in algorithmic decisionmaking.

Recommendations

  • Particular, domain-specific concepts of equity should be made clear to stakeholders.
  • Rely less on sensitive-attribute designation for assuring equity.
  • Ground truth accuracy may not always be a neutral success condition for judging decisionmaking models.
  • Signals of trustworthiness — opening the "black box" — have to be appropriate to the institutional context and relevant stakeholders.
  • Institutions deploying algorithms should put procedures in place to monitor or evaluate the response to algorithmic decisionmaking artifacts, not just the performance of these artifacts.
  • Designers and deployers of algorithms may benefit from adopting an algorithmic equity checklist approach to minimize undesirable equity outcomes.

Research conducted by

This project is a RAND Venture. Funding was provided by gifts from RAND supporters and income from operations. The research was conducted by the Community Health and Environmental Policy Program within RAND Social and Economic Well-Being.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.