Identifying Systemic Bias in the Acquisition of Machine Learning Decision Aids for Law Enforcement Applications
Biased software tools that use artificial intelligence (AI) and machine learning (ML) algorithms can exacerbate societal inequities. Ensuring equitability in the outcomes from such tools—in particular, those used by law enforcement agencies—is crucial.
Researchers from the Homeland Security Operational Analysis Center developed a notional acquisition framework of five steps at which ML bias concerns can emerge: acquisition planning; solicitation and selection; development; delivery; and deployment, maintenance, and sustainment. Bias can be introduced into the acquired system during development and deployment, but the other three steps can influence whether, and to what extent, that happens. Therefore, to eliminate harmful bias, efforts to address ML bias need to be integrated throughout the acquisition process.
As various U.S. Department of Homeland Security (DHS) components acquire technologies with AI capabilities, actions that the department could take to mitigate ML bias include establishing standards for measuring bias in law enforcement uses of ML; broadly accounting for all costs of biased outcomes; and developing and training law enforcement personnel in AI capabilities. More-general courses of action for mitigating ML bias include performance tracking and disaggregated evaluation, certification labels on ML resources, impact assessments, and continuous red-teaming.
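The Perspective does not prescribe a specific metric or implementation for these courses of action. As a rough illustration of what disaggregated evaluation can look like in practice, the sketch below computes a false positive rate separately for each demographic group in a set of hypothetical model outputs and reports the gap between groups; the data, group labels, and choice of metric are illustrative assumptions, not drawn from the document.

```python
# Minimal sketch of disaggregated evaluation, one of the general mitigation
# approaches named above. All data and group labels are hypothetical.
from collections import defaultdict


def false_positive_rate(records):
    """FPR = false positives / all actual negatives."""
    negatives = [r for r in records if not r["label"]]
    if not negatives:
        return float("nan")
    false_positives = sum(1 for r in negatives if r["prediction"])
    return false_positives / len(negatives)


def disaggregated_fpr(records, group_key="group"):
    """Compute the false positive rate separately for each group."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r[group_key]].append(r)
    return {g: false_positive_rate(rs) for g, rs in by_group.items()}


if __name__ == "__main__":
    # Hypothetical model outputs: each record has a true label, a model
    # prediction, and a demographic group attribute.
    records = [
        {"group": "A", "label": False, "prediction": True},
        {"group": "A", "label": False, "prediction": False},
        {"group": "A", "label": True,  "prediction": True},
        {"group": "B", "label": False, "prediction": True},
        {"group": "B", "label": False, "prediction": True},
        {"group": "B", "label": True,  "prediction": True},
    ]
    rates = disaggregated_fpr(records)
    for group, fpr in sorted(rates.items()):
        print(f"group {group}: FPR = {fpr:.2f}")
    # A large gap between groups is a signal to investigate before deployment.
    print(f"FPR gap: {max(rates.values()) - min(rates.values()):.2f}")
```

An aggregate accuracy number can look acceptable while error rates differ sharply across groups, which is why tracking performance on a per-group basis is listed alongside certification labels, impact assessments, and red-teaming as a general safeguard.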
This Perspective describes ways to identify and address bias in these systems.
This research was conducted using internal funding generated from operations of the RAND Homeland Security Research Division (HSRD) and within the HSRD Acquisition and Development Program.
This commentary is part of the RAND Corporation Expert Insights series. RAND Expert Insights present perspectives on timely policy issues. All RAND Expert Insights undergo peer review to ensure high standards for quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.