Machine learning (ML) can have a significant impact on public policy by modeling complex relationships and augmenting human decisionmaking. However, overconfidence in results and misinterpreted algorithms can cause harm, such as the perpetuation of structural inequities. In this Perspective, the authors give an overview of ML and discuss the importance of its interpretability. In addition, they offer the following recommendations, which will help policymakers develop trustworthy, transparent, and accountable information that leads to more-objective and more-equitable policy decisions: (1) improve data through coordinated investments; (2) approach ML expecting interpretability, and be critical; and (3) leverage interpretable ML to understand policy values and predict policy impacts.
Peet, Evan D., Brian G. Vegetabile, Matthew Cefalu, Joseph D. Pane, and Cheryl L. Damberg, Machine Learning in Public Policy: The Perils and the Promise of Interpretability, Santa Monica, CA: RAND Corporation, PE-A828-1, November 2022. As of November 15, 2022: https://www.rand.org/pubs/perspectives/PEA828-1.html