- In what ways do algorithms and artificial intelligence agents influence people's lives?
- What consequences and risks are associated with the pervasive use of algorithms?
- Are there policy implications to these risks?
- What can be done to mitigate these risks?
Machine learning algorithms and artificial intelligence systems influence many aspects of people's lives: the news articles they read, the movies they watch, the people they spend time with, their access to credit, and even the investment of capital. Algorithms have been empowered to make such decisions and take actions for the sake of efficiency and speed. Despite these gains, there are concerns about the rapid automation of jobs (even such jobs as journalism and radiology). A better understanding of attitudes toward and interactions with algorithms is essential precisely because of the aura of objectivity and infallibility that cultures tend to ascribe to them. This report illustrates some of the shortcomings of algorithmic decisionmaking, identifies key themes around the problems of algorithmic errors and bias, and highlights the added risks and complexities inherent in the use of algorithmic decisionmaking in public policy. The report ends with a survey of approaches for combating these problems.
Algorithms and Artificial Intelligence Agents Influence Many Areas of Life Today
- In particular, these artificial agents influence the news articles people read and the advertising they see, access to credit and the investment of capital, and risk assessments for convicts, among other areas.
This Reliance on Artificial Agents Carries Risks that Have Caused Concern
- The potential for bias is one concern. Algorithms give the illusion of being unbiased, but they are written by people and trained on socially generated data, so they can encode and amplify human biases. The use of artificial agents in sentencing and other legal contexts has raised particular concern about bias.
- Another concern is that increasing reliance on artificial agents is fueling the rapid automation of jobs, even jobs that would seem to rely heavily on human intelligence, such as journalism and radiology.
- Other risks include the possibility of hacked reward functions (a known issue in machine learning) and the inability of artificial agents to account for cultural differences.
Remedies Will Most Likely Require a Combination of Technical and Nontechnical Approaches
- Reliance on algorithms for autonomous decisionmaking requires equipping them with the means to audit the causal factors behind their decisions.
- Because algorithms can lead to inequitable outcomes, instilling a healthy dose of informed skepticism in the public would help reduce the effects of automation bias.
- Training and diversity in the ranks of algorithm developers could help improve sensitivity to potential disparate impact problems.
- Identify critical services and subsystems that require "human-in-the-loop" decisionmaking; selection criteria may include high risk or the need for special accountability. Limit the role of artificial agents in these systems to a strictly advisory capacity, and ensure that the results of these advisory artificial agents can be audited.
- Establish best practices for auditing algorithmic decisionmaking aids designed for use in government services and policy domains (e.g., the criminal justice system and social services administration). This should include specific guidance discouraging the use of unaccredited third-party black-box algorithmic solutions. Audit procedures should also address questions of disparate impact.
- Adopt standardized disclosure practices to inform stakeholders when decisions affecting them are algorithmically generated. Institute standard procedures for appealing or reviewing such decisions.
- Direct science research funding toward the study of algorithmic disparate impact, and engage with the commercial artificial intelligence community to share best practices.
- Address diversity issues in the science, technology, engineering, and math educational pipeline, and update accreditation guidelines for engineering schools to include more training on the effects of technology on society and on sociotechnical systems more generally.
This project is a RAND Venture. Funding for this venture was provided by philanthropic contributions from RAND supporters and income from operations. The research was conducted by the Center for Global Risk and Security (CGRS), part of International Programs at the RAND Corporation.
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
Permission is given to duplicate this electronic document for personal use only, as long as it is unaltered and complete. Copies may not be duplicated for commercial purposes. Unauthorized posting of RAND PDFs to a non-RAND Web site is prohibited. RAND PDFs are protected under copyright law. For information on reprint and linking permissions, please visit the RAND Permissions page.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.