An Intelligence in Our Image

The Risks of Bias and Errors in Artificial Intelligence

Published Apr 5, 2017

by Osonde A. Osoba, William Welser IV

An Arabic-language version of this report is also available.

Research Questions

  1. In what ways do algorithms and artificial intelligence agents influence people's lives?
  2. What consequences and risks are associated with the pervasive use of algorithms?
  3. Are there policy implications to these risks?
  4. What can be done to mitigate these risks?

Machine learning algorithms and artificial intelligence systems influence many aspects of people's lives: the news articles they read, the movies they watch, the people they spend time with, their access to credit, and even the investment of capital. Algorithms have been empowered to make such decisions and take such actions for the sake of efficiency and speed. Despite these gains, there are concerns about the rapid automation of jobs (even such jobs as journalism and radiology). A better understanding of attitudes toward and interactions with algorithms is essential precisely because of the aura of objectivity and infallibility that cultures tend to ascribe to them. This report illustrates some of the shortcomings of algorithmic decisionmaking, identifies key themes around the problem of algorithmic errors and bias, highlights the added risks and complexities inherent in the use of algorithmic decisionmaking in public policy, and concludes with a survey of approaches for combating these problems.

Key Findings

Algorithms and Artificial Intelligence Agents Influence Many Areas of Life Today

  • In particular, these artificial agents influence the news articles people read (and the advertising that accompanies them), access to credit and capital investment, risk assessments for convicts, and other areas.

This Reliance on Artificial Agents Carries Risks that Have Caused Concern

  • The potential for bias is one concern. Algorithms give the illusion of being unbiased, but they are written by people and trained on socially generated data, so they can encode and amplify human biases. The use of artificial agents in sentencing and other legal contexts has caused particular concern about bias.
  • Another concern is that increasing reliance on artificial agents is fueling the rapid automation of jobs, even jobs that would seem to rely heavily on human intelligence, such as journalism and radiology.
  • Among other risks are the possibility of hacked reward functions (an issue specific to machine learning) and the inability of artificial agents to account for cultural differences.

Remedies Will Most Likely Require a Combination of Technical and Nontechnical Approaches

  • Reliance on algorithms for autonomous decisionmaking requires equipping them with means of auditing the causal factors behind decisions.
  • Because algorithms can lead to inequitable outcomes, instilling a healthy dose of informed skepticism in the public would help reduce the effects of automation bias.
  • Training and diversity in the ranks of algorithm developers could help improve sensitivity to potential disparate impact problems.
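
The "disparate impact" these remedies target can be made concrete with a simple audit metric. The sketch below is illustrative and not from the report: it applies the four-fifths rule, a common U.S. regulatory guideline under which a protected group's selection rate below 80 percent of the most-favored group's rate is flagged for review. The loan-approval data, group labels, and threshold are assumptions for the example.

```python
# Sketch: auditing a binary decision rule for disparate impact.
# Data and the four-fifths threshold below are illustrative assumptions,
# not taken from the report.

def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit data: 1 = loan approved, 0 = denied.
group_a = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]  # 80% approval rate
group_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # 40% approval rate

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: potential disparate impact (below four-fifths threshold)")
```

A real audit would go further (statistical significance tests, intersectional groups, and examination of the causal factors behind decisions, as the first bullet above recommends), but even this ratio makes an inequity visible that aggregate accuracy figures hide.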

Recommendations

  • Identify critical services and subsystems that require "human-in-the-loop" decisionmaking. Selection criteria may include high-risk systems or systems that require special accountability. Limit the role of artificial agents in these systems to a strictly advisory capacity, and emphasize the need to audit the results of these advisory artificial agents.
  • Establish best practices for auditing algorithmic decisionmaking aids designed for use in government services and policy domains (e.g., the criminal justice system and social services administration). This should include specific guidance discouraging the use of unaccredited third-party black-box algorithmic solutions. Audit procedures should also address questions of disparate impact.
  • Adopt standardized disclosure practices to inform stakeholders when decisions affecting them are algorithmically generated. Institute standard procedures for appealing or reviewing such decisions.
  • Direct science research funds toward the study of algorithmic disparate impact. Engage with the commercial artificial intelligence community to share best practices.
  • Address diversity issues in the science, technology, engineering, and math educational pipeline. Update accreditation guidelines for engineering schools to include more training on the effects of technology on society and on sociotechnical systems more generally.

Research conducted by

This project is a RAND Venture. Funding for this venture was provided by philanthropic contributions from RAND supporters and income from operations. The research was conducted by the Center for Global Risk and Security (CGRS), part of International Programs at the RAND Corporation.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.