Identifying Systemic Bias in the Acquisition of Machine Learning Decision Aids for Law Enforcement Applications

by Douglas Yeung, Inez Khan, Nidhi Kalra, Osonde A. Osoba


Biased software tools that use artificial intelligence (AI) and machine learning (ML) algorithms can exacerbate societal inequities. Ensuring equitable outcomes from such tools, particularly those used by law enforcement agencies, is crucial.

Researchers from the Homeland Security Operational Analysis Center developed a notional acquisition framework of five steps at which ML bias concerns can emerge: acquisition planning; solicitation and selection; development; delivery; and deployment, maintenance, and sustainment. Bias can be introduced into the acquired system during development and deployment, but the other three steps can influence the extent, if any, to which that happens. Therefore, to eliminate harmful bias, efforts to address ML bias need to be integrated throughout the acquisition process.

As various U.S. Department of Homeland Security (DHS) components acquire technologies with AI capabilities, actions that the department could take to mitigate ML bias include establishing standards for measuring bias in law enforcement uses of ML; broadly accounting for all costs of biased outcomes; and developing and training law enforcement personnel in AI capabilities. More-general courses of action for mitigating ML bias include performance tracking and disaggregated evaluation, certification labels on ML resources, impact assessments, and continuous red-teaming.
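One of the courses of action named above, disaggregated evaluation, means computing a system's error rates separately for each affected group rather than reporting a single aggregate figure, so that disparities are visible. The sketch below illustrates the idea in Python with hypothetical data; the record format, group labels, and the choice of false positive rate as the tracked metric are illustrative assumptions, not details from this Perspective.

```python
from collections import defaultdict

def disaggregated_fpr(records):
    """Compute per-group false positive rate (FPR) from prediction records.

    records: iterable of (group, true_label, predicted_label) tuples,
             where labels are 0 (negative) or 1 (positive).
    Returns {group: {"fpr": float or None, "n": int}}.
    """
    counts = defaultdict(lambda: {"fp": 0, "tn": 0, "n": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        c["n"] += 1
        if y_true == 0:          # only true negatives can yield false positives
            if y_pred == 1:
                c["fp"] += 1
            else:
                c["tn"] += 1
    results = {}
    for group, c in counts.items():
        negatives = c["fp"] + c["tn"]
        results[group] = {
            "fpr": c["fp"] / negatives if negatives else None,
            "n": c["n"],
        }
    return results

# Hypothetical records: (group, true label, model prediction)
records = [
    ("A", 0, 0), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = disaggregated_fpr(records)
# Group A's FPR is 1/3; group B's is 2/3 -- an aggregate FPR of 1/2
# would hide that the tool errs against group B twice as often.
```

Tracking such per-group metrics over time, as part of performance tracking during deployment, maintenance, and sustainment, is what allows an agency to detect bias that emerges after delivery.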

This Perspective describes ways to identify and address bias in these systems.


This research was conducted using internal funding generated from operations of the RAND Homeland Security Research Division (HSRD) and within the HSRD Acquisition and Development Program.

This report is part of the RAND Corporation perspective series. RAND perspectives present informed perspectives on timely topics that address the challenges facing the public and private sectors. All RAND perspectives undergo rigorous peer review to ensure high standards for research quality and objectivity.

Permission is given to duplicate this electronic document for personal use only, as long as it is unaltered and complete. Copies may not be duplicated for commercial purposes. Unauthorized posting of RAND PDFs to a non-RAND Web site is prohibited. RAND PDFs are protected under copyright law. For information on reprint and linking permissions, please visit the RAND Permissions page.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.