Evaluating the Effectiveness of Artificial Intelligence Systems in Intelligence Analysis

Published Aug 26, 2021

by Daniel Ish, Jared Ettinger, Christopher Ferris



Research Questions

  1. How are AI system measures of performance connected with effectiveness in intelligence analysis?
  2. How might AI be used to support the intelligence process, both in real systems under development and in hypothetical systems that may not yet exist?
  3. How can researchers model the intelligence process for the purposes of determining how AI systems situated in this process affect it?
  4. What metrics exist to characterize the performance of AI systems?

The U.S. military and intelligence community have shown interest in developing and deploying artificial intelligence (AI) systems to support intelligence analysis, both as an opportunity to leverage new technology and as a solution for an ever-proliferating data glut. However, deploying AI systems in a national security context requires the ability to measure how well those systems will perform in the context of their mission.

To address this issue, the authors begin by introducing a taxonomy of the roles that AI systems can play in supporting intelligence—namely, automated analysis, collection support, evaluation support, and information prioritization—and provide qualitative analyses of the drivers of the impact of system performance for each of these categories.

The authors then single out information prioritization systems, which direct intelligence analysts' attention to useful information and allow them to pass over information that is not useful to them, for quantitative analysis. Developing a simple mathematical model that captures the consequences of errors on the part of such systems, the authors show that their efficacy depends not just on the properties of the system but also on how the system is used. Through this exercise, the authors show how both the calculated impact of an AI system and the metrics used to predict it can be used to characterize the system's performance in a way that can help decisionmakers understand its actual value to the intelligence mission.
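The report's actual model is not reproduced here, but the central point, that a prioritization system's value depends on how it is used as well as on its accuracy, can be illustrated with a toy simulation (all parameters hypothetical): a system scores a queue of documents, an analyst reviews only the top-k, and the share of useful documents that reaches the analyst varies with the analyst's review capacity even when the system itself is unchanged.

```python
# Toy sketch of an information-prioritization system (illustrative only;
# not the model developed in the report). A noisy scorer ranks documents,
# the analyst reviews the top-k, and we measure what share of the truly
# useful documents the analyst actually sees.
import random

random.seed(0)

N = 10_000          # documents in the queue (hypothetical)
P_USEFUL = 0.02     # fraction that are truly useful (hypothetical)

def simulate(noise, k):
    """Rank documents with a noisy score; return recall within the top-k."""
    docs = [random.random() < P_USEFUL for _ in range(N)]
    # Score = 1 for useful, 0 for not, plus Gaussian noise: a noisier
    # system separates useful from useless documents less cleanly.
    ranked = sorted(docs, key=lambda useful: useful + random.gauss(0, noise),
                    reverse=True)
    total_useful = sum(docs)
    return sum(ranked[:k]) / total_useful if total_useful else 0.0

# The same system (same noise level) delivers very different value
# depending on how many documents the analyst has time to review.
for k in (100, 500, 2000):
    print(f"review capacity {k:5d}: "
          f"share of useful docs seen = {simulate(noise=0.5, k=k):.2f}")
```

Holding the system fixed and varying only the review capacity k changes the mission-level outcome, which is the sense in which efficacy depends on usage, not just on system properties.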

Key Findings

Using metrics not matched to actual priorities obscures system performance and impedes informed choice of the optimal system

  • Metric choice should take place before the system is built and be guided by attempts to estimate the real impact of system deployment.

Effectiveness, and therefore the metrics that measure it, can depend not just on system properties but also on how the system is used

  • A key consideration for decisionmakers is the amount of resources devoted to the mission beyond those spent building the system.


Recommendations

  • Begin with the right metrics. This requires having a detailed understanding of the way an AI system will be used and choosing metrics that reflect success with respect to this utilization.
  • Reevaluate (and retune) regularly. Because the world around the system continues to evolve after deployment, system evaluation must continue as a portion of regular maintenance.
  • Speak the language. System designers have a well-established set of metrics for capturing the performance of AI systems, and being conversant in these traditional metrics will ease communication with experts during the process of designing a new system or maintaining an existing one.
  • Conduct further research into methods of evaluating AI system effectiveness.
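The "traditional metrics" that system designers use include precision, recall, and the F1 score, all computed from confusion-matrix counts. A generic illustration (the counts below are hypothetical and not drawn from the report):

```python
# Standard classification metrics commonly used to characterize AI
# system performance (generic illustration; counts are hypothetical).
def precision(tp, fp):
    """Of the documents flagged as useful, what fraction really were?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the truly useful documents, what fraction were flagged?"""
    return tp / (tp + fn)

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical counts for a document-triage system:
# 80 useful docs flagged, 20 false alarms, 40 useful docs missed.
tp, fp, fn = 80, 20, 40
print(f"precision = {precision(tp, fp):.2f}")   # 0.80
print(f"recall    = {recall(tp, fn):.2f}")      # 0.67
print(f"F1        = {f1(tp, fp, fn):.2f}")      # 0.73
```

Being conversant in these quantities eases communication with designers, but as the findings above note, none of them by itself captures mission-level effectiveness.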

This research was sponsored by the Office of the Secretary of Defense and conducted within the Cyber and Intelligence Policy Center of the RAND National Security Research Division (NSRD).

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.