Exploring Artificial Intelligence Use to Mitigate Potential Human Bias Within U.S. Army Intelligence Preparation of the Battlefield Processes

David Stebbins, Richard S. Girven, Timothy Parker, Thomas Deen, Brandon De Bruhl, James Ryseff, Jessica Welburn Paige, Annie Yu Kleiman, Sunny D. Bhatt, Éder M. Sousa, et al.

Research report published Aug 6, 2024

U.S. policymakers require rapid, actionable, and objective intelligence to mitigate or respond to global conflict. Intelligence preparation of the battlefield (IPB) is a critical staff process that provides the foundation for commanders and staff to achieve a thorough understanding of the operational environment. Artificial intelligence (AI) could play a significant role in mitigating potential cognitive biases within IPB by helping military planners understand the threat environment, evaluate threats and risks to mission, and identify alternative courses of action that were previously inaccessible to analysts because of various process challenges (e.g., complex intelligence information, time constraints).

One key strength of AI use in IPB is its ability to analyze large amounts of data from various sources, including imagery, social media, and other open-source information that could corroborate classified intelligence products. Machine-assisted teaming can further leverage the strengths of AI while also accounting for the human decisionmaking process to ensure analytical objectivity and validity. Efficiencies gained through machine-assisted teaming would require (1) long-term Army investment commitments to AI research and development, (2) partnering with academia and private-sector industries to gain access to cutting-edge AI technology and expertise, and (3) prioritizing training for soldiers in AI operations and analysis.
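To make the open-source corroboration idea above concrete, the following minimal Python sketch scores how closely a set of open-source snippets match an analytic claim using TF-IDF cosine similarity. This is an illustrative assumption, not a method described in the report: the claim, the snippets, and the scikit-learn approach are all hypothetical examples of how machine assistance could rank corroborating reporting for an analyst to review.

```python
# Minimal sketch: ranking open-source snippets by how well they corroborate
# an analytic claim, via TF-IDF cosine similarity. Illustrative only; the
# texts and the scikit-learn approach are assumptions, not from the report.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "Armored units are massing near the northern river crossing."
open_source = [
    "Social media video shows tank columns moving toward the northern bridge.",
    "Local news reports grain shipments delayed at the southern port.",
    "Imagery analysts note new vehicle revetments by the river crossing.",
]

# Fit a shared vocabulary over the claim and all snippets, then compare
vectors = TfidfVectorizer().fit_transform([claim] + open_source)
scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()

# Present the most corroborative snippets first for human review
for snippet, score in sorted(zip(open_source, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {snippet}")
```

In a machine-assisted teaming arrangement of the kind described above, such a ranking would only triage reporting; the analyst, not the machine, would judge whether the top-scoring items actually corroborate the classified assessment.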

AI-IPB integration could provide Army planning staff with a comprehensive understanding of operational environments and enhance decisionmaking by offering relevant mitigation strategies. In this report, the authors explore how AI might be used to mitigate potential human bias within Army IPB processes.

Key Findings

The IPB process has remained relatively unchanged for decades

  • While sources, methods, and requirements have evolved, the underlying premises supporting analytical functions have not been systematically reviewed to interrogate the potential for human bias.

A literature review and interviews revealed little consensus on what may constitute bias

  • While some definitions overlap (e.g., affinity bias versus similarity bias), the proliferation of definitions makes it difficult to establish conclusive definitions of the biases that may affect the IPB community.

While the U.S. Department of Defense has demonstrated a willingness to consider generative AI as part of its overall efforts to gain information and decision advantage, greater AI use among IPB stakeholders will likely require deliberate change management techniques to help the community understand what AI can offer

  • Many interviewees were concerned about generative AI platform use because of the “black box” nature of underlying data.
  • Others suggested that cultural norms within the military (and Intelligence Community) may also contribute to a general unwillingness to use such platforms, given operational security considerations.

Existing IPB processes and structures may inhibit policy option selection in the course-of-action development period

  • While IPB structures can help to ensure a valid, reliable, and repeatable process among stakeholders, there may be little opportunity (e.g., time) to generate additional options for commanders to address complex national security issues, since existing courses of action represent a consolidated view of previous judgments.

Recommendations

  • Use the suggested framework in this report to drive future research efforts that are focused on the impact of AI on existing intelligence tradecraft, analytical techniques, and associated processes.
  • Develop a series of additional machine-assisted exercises that can draw from classified data to explore additional AI-added value to real-world scenarios.
  • Conduct a formal review of Army IPB training and existing practice to identify whether there are any self-correcting measures in place to mitigate potential bias.
  • Consider existing areas within IPB that may readily lend themselves to automated collection and analysis, and pilot-test their utility within IPB.
  • Conduct working groups to explore potential IPB process modernization.
  • Develop an informal (internal) survey or interview instrument that can assist IPB managers in identifying the types of potential cognitive biases that could affect existing IPB practice.
  • Showcase AI's ability (through pilot testing) to augment IPB decisionmaking.
  • Develop an Army AI data oversight policy that considers supply-chain integrity, risk, responsible use, and other ethical considerations that must be in place prior to scaling AI operations.
  • Consider releasing select (unclassified or declassified) historical IPB records and assessments to open-source research communities to allow experienced data scientists and other researchers with bias expertise opportunities to identify additional challenges and enablers.
  • Consider introducing Intelligence Community–developed structured analytic technique rule-sets to AI platforms to explore whether AI can offer a platform for analytic self-correction, as discussed in this report (see the sketch after this list).
  • Conduct a retrospective study examining courses of action that were considered but not selected by IPB managers.
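
To illustrate the rule-set recommendation above, the following minimal Python sketch encodes a simplified Analysis of Competing Hypotheses (ACH) matrix, one of the Intelligence Community's structured analytic techniques. The hypotheses, evidence items, and weights are hypothetical, and the scoring is a simplified sketch of how such a rule-set might be expressed for an AI platform, not an implementation proposed in the report.

```python
# Minimal sketch of an Analysis of Competing Hypotheses (ACH) rule-set.
# All hypotheses, evidence, and weights below are hypothetical examples.

from typing import Dict, List

# Consistency ratings: "C" consistent, "I" inconsistent, "N" neutral
EvidenceRow = Dict[str, str]  # hypothesis name -> rating

def rank_hypotheses(matrix: List[EvidenceRow], weights: List[float]) -> Dict[str, float]:
    """Score each hypothesis by weighted inconsistency; lower is more viable.

    ACH emphasizes disconfirmation: evidence that is *inconsistent* with a
    hypothesis carries the diagnostic weight, which counters the analyst's
    tendency to seek only confirming evidence.
    """
    scores: Dict[str, float] = {}
    for row, weight in zip(matrix, weights):
        for hypothesis, rating in row.items():
            scores.setdefault(hypothesis, 0.0)
            if rating == "I":
                scores[hypothesis] += weight
    return dict(sorted(scores.items(), key=lambda kv: kv[1]))

# Hypothetical example: two competing threat courses of action
matrix = [
    {"H1: attack north": "C", "H2: attack south": "I"},  # bridging assets moved north
    {"H1: attack north": "N", "H2: attack south": "C"},  # artillery massing south
    {"H1: attack north": "I", "H2: attack south": "C"},  # decoy emitters detected north
]
weights = [2.0, 1.0, 1.5]  # analyst-assigned evidence diagnosticity

print(rank_hypotheses(matrix, weights))
# {'H1: attack north': 1.5, 'H2: attack south': 2.0} -> H1 is less contradicted
```

Encoding the technique this way makes the disconfirmation logic explicit and auditable, which speaks to the "black box" concern raised by interviewees: an AI platform applying such a rule-set could show exactly which evidence items drove a hypothesis's score.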


Citation

RAND Style Manual
Stebbins, David, Richard S. Girven, Timothy Parker, Thomas Deen, Brandon De Bruhl, James Ryseff, Jessica Welburn Paige, Annie Yu Kleiman, Sunny D. Bhatt, Éder M. Sousa, Marta Kepe, and Matthew Fay, Exploring Artificial Intelligence Use to Mitigate Potential Human Bias Within U.S. Army Intelligence Preparation of the Battlefield Processes, RAND Corporation, RR-A2763-1, 2024. As of September 11, 2024: https://www.rand.org/pubs/research_reports/RRA2763-1.html
Chicago Manual of Style
Stebbins, David, Richard S. Girven, Timothy Parker, Thomas Deen, Brandon De Bruhl, James Ryseff, Jessica Welburn Paige, Annie Yu Kleiman, Sunny D. Bhatt, Éder M. Sousa, Marta Kepe, and Matthew Fay, Exploring Artificial Intelligence Use to Mitigate Potential Human Bias Within U.S. Army Intelligence Preparation of the Battlefield Processes. Santa Monica, CA: RAND Corporation, 2024. https://www.rand.org/pubs/research_reports/RRA2763-1.html.

Funding for this research was made possible by the independent research and development provisions of RAND's contracts for the operation of its U.S. Department of Defense federally funded research and development centers.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.