Air Dominance Through Machine Learning
A Preliminary Exploration of Artificial Intelligence–Assisted Mission Planning
Research Questions
- Could contemporary ML agents be trained to effectively exhibit intelligent mission-planning behaviors without requiring training data on billions of possible combinations of situations?
- Could machine agents learn strategies against surface-to-air missiles (SAMs) using combinations of striker, jamming, and decoy aircraft? Jammers need to get close enough to SAMs to affect them but remain far enough away that they do not get shot down. Decoys need to distract a SAM from a striker at the right time. (A toy sketch of the jammer standoff constraint follows this list.)
- Could sufficiently generalizable representations be built to capture the richness of the planning problem? Would the lessons learned generalize across changes in threat location, type, and number?
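To make the jammer standoff constraint concrete, here is a minimal geometric sketch. The range values and function name are illustrative assumptions, not parameters from the report.

```python
import math

def jammer_in_standoff_band(jammer_xy, sam_xy,
                            jam_range=120.0, sam_engage_range=80.0):
    """Return True if the jammer can affect the SAM without being engaged.

    The jammer must be within its effective jamming range of the SAM
    (close enough to affect it) yet outside the SAM's engagement range
    (far enough away to survive). All ranges here are notional.
    """
    dist = math.dist(jammer_xy, sam_xy)
    return sam_engage_range < dist <= jam_range

# Example: a SAM at the origin, jammer orbiting 100 units away.
print(jammer_in_standoff_band((100.0, 0.0), (0.0, 0.0)))  # True
```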
U.S. air superiority, a cornerstone of U.S. deterrence efforts, is being challenged by competitors, most notably China. The spread of machine learning (ML) is only intensifying that threat. One way to counter this challenge is to use automation more effectively to enable new approaches to mission planning.
The authors of this report demonstrate a prototype of a proof-of-concept artificial intelligence (AI) system to help develop and evaluate new concepts of operations for the air domain. The prototype platform integrates open-source deep learning frameworks, contemporary algorithms, and the Advanced Framework for Simulation, Integration, and Modeling—a U.S. Department of Defense–standard combat simulation tool. The goal is to exploit AI systems' ability to learn through replay at scale, generalize from experience, and improve over repetitions to accelerate and enrich operational concept development.
In this report, the authors discuss collaborative behavior orchestrated by AI agents in highly simplified versions of suppression of enemy air defenses (SEAD) missions. The initial findings highlight both the potential of reinforcement learning (RL) to tackle complex, collaborative air mission planning problems and some significant challenges facing this approach.
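As a concrete illustration of the agent-simulation loop such a platform is built around, the following is a minimal, self-contained sketch of a one-dimensional SEAD-style environment with the familiar reset/step interface. It is a toy stand-in: the report's actual AFSIM integration (see Appendix C) is not reproduced here, and all positions, probabilities, and rewards are invented for illustration.

```python
import random

class SimpleSEADEnv:
    """Toy 1-D stand-in: a striker closes on a SAM located at position 0."""

    def __init__(self, start=10.0):
        self.start = start
        self.pos = start

    def reset(self):
        self.pos = self.start
        return self.pos  # observation: current distance to the SAM

    def step(self, action):
        """action: -1 (retreat), 0 (hold), or +1 (advance)."""
        self.pos -= action
        destroyed = self.pos <= 0                        # striker reached the SAM
        shot_down = self.pos <= 3 and random.random() < 0.1
        reward = 100.0 if destroyed else (-50.0 if shot_down else -1.0)
        done = destroyed or shot_down
        return self.pos, reward, done

env = SimpleSEADEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(+1)  # trivial always-advance policy
```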
Key Findings
RL can tackle complex planning problems, but the approach still has limitations and faces significant challenges
- Pure RL algorithms can be inefficient and prone to learning collapse.
- Proximal policy optimization (PPO) is a recent step in the right direction for addressing the learning collapse issue: it has built-in constraints that prevent the network parameters from changing too much in each iteration (see the sketch following this list).
- ML agents are capable of learning cooperative strategies. In simulations, the strike aircraft synergized with jammer or decoy effects on a SAM.
- Trained algorithms should be able to adapt fairly easily to changes in mission parameters, such as the number and locations of assets.
- Few real-world data exist on successful and unsuccessful missions. Compared with the volumes of data used to train contemporary ML systems, very few real missions have been flown against air defenses, and virtually all of them were successful.
- For analyses involving the use of large simulations in place of large datasets, the required computational burden will continue to be a significant challenge. The scaling of computational power and time required to train realistic sets of capabilities (dozens of platforms) against realistic threats (dozens of SAMs) remains unclear.
- Developing trust in AI algorithms will require more-exhaustive testing, as well as fundamental advances in algorithm verifiability and in safety and boundary assurances.
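For context on the clipping constraint noted above, here is a minimal sketch of PPO's clipped surrogate objective (Schulman et al., 2017). The function and batch layout are illustrative and are not drawn from the report's implementation.

```python
import numpy as np

def ppo_clipped_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective: mean of min(r*A, clip(r, 1-eps, 1+eps)*A).

    Clipping the probability ratio r bounds how far a single update can
    move the policy, which is the built-in constraint the finding refers to.
    """
    ratio = np.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return float(np.mean(np.minimum(ratio * advantages,
                                    clipped * advantages)))

# Example with a tiny batch of three transitions.
logp_old = np.log(np.array([0.20, 0.50, 0.30]))
logp_new = np.log(np.array([0.25, 0.45, 0.30]))
adv = np.array([1.0, -0.5, 0.2])
print(ppo_clipped_objective(logp_new, logp_old, adv))
```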
Recommendations
- Future work on automated mission planning should focus on developing robust multiagent algorithms. Reward functions in RL problems can drastically change AI behavior in often unexpected ways. Care must be taken in designing such functions to accurately capture risk and intent; a toy example follows this list.
- Although simulation environments are crucial in data-scarce problems, simulations should be tuned to balance speed (lower computational requirements) versus accuracy (real-world transferability).
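To illustrate how reward design encodes risk and intent, the following is a hypothetical terminal reward for a simplified SEAD episode. The weights and event names are invented for illustration; this is not the report's reward function.

```python
def mission_reward(target_destroyed, aircraft_lost, time_steps,
                   w_kill=100.0, w_loss=-50.0, w_time=-0.1):
    """Hypothetical terminal reward for one episode.

    Shifting w_loss relative to w_kill can flip learned behavior from
    aggressive strikes to overly cautious loitering, which is why the
    weights must be chosen to reflect actual risk tolerance and intent.
    """
    return (w_kill * float(target_destroyed)
            + w_loss * aircraft_lost
            + w_time * time_steps)

# A successful strike with one aircraft lost after 200 time steps:
print(mission_reward(True, 1, 200))  # 100 - 50 - 20 = 30.0
```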
Table of Contents
Chapter One
Introduction
Chapter Two
One-Dimensional Problem
Chapter Three
Two-Dimensional Problem
Chapter Four
Computational Infrastructure
Chapter Five
Conclusions
Appendix A
2-D Problem State Vector Normalization
Appendix B
Containerization and ML Infrastructure
Appendix C
Managing Agent-Simulation Interaction in the 2-D Problem
Appendix D
Overview of Learning Algorithms
Funding for this independent research was provided by gifts from RAND supporters and income from operations. The research was conducted within RAND Project AIR FORCE.