This report documents an interface between an agent-based force-on-force simulation and a network simulation for the U.S. Army's Cyber Center of Excellence, Capabilities Development and Integration Directorate (CDID), Cyber Battle Lab. It summarizes that effort, serves as a guide for users of the interface, and assumes readers are familiar with simulations and computer interfaces.
Research Questions
- What kind of interface can be developed to give the Army an additional capability that complements existing modeling and simulation capabilities?
- How can MANA's capability be enhanced to meet the Army's experimentation needs?
This report documents an interface between an agent-based force-on-force simulation and a network simulation for the U.S. Army's Cyber Center of Excellence, Capabilities Development and Integration Directorate, Cyber Battle Lab. One of the critical functions executed by the Battle Lab is to provide modeling and simulation (M&S) support to validate current and future command, control, communication, and network–related concepts, technologies, and architectures. This interface was designed to give the Army an additional capability — complementing existing M&S capabilities — to study the operational impact of current and future tactical networks. It also provides a quick-turn analysis capability to support experimentation and exercises facilitated by the Experimentation Division.
This effort successfully interfaced the two simulation tools: an agent-based force-on-force simulation, the Map Aware Non-Uniform Automata (MANA), and a high-resolution communication simulation, the Joint Network Emulator (JNE).
MANA's abstract, agent-based design minimizes the cost of developing scenarios by trading away fidelity. It enables a wide range of operational outcomes with minimal input from the modeler. When interfaced with JNE, the pair creates a powerful exploratory tool. This type of modeling has a broad range of applications, depending on the research questions.
The Army should use this capability to explore questions about the operational impact of the network, but it should also continue to expand its M&S toolkit for supporting analysis by exploring other agent-based and network simulation tools.
Key Findings
Five Different Test Scenarios Explored Simulation Performance Times
- In the performance scenario, the interface experienced no anomalies and is thus deemed reasonably robust during high-intensity runs.
- The verification scenario indicated that 100 percent of the messages received in JNE were received in MANA, although not all messages sent from MANA appeared to arrive in JNE for injection into the network.
- The study delivered a capability the Army can use to study network performance under a wide variety of conditions, within a framework that is easier and less labor-intensive to set up.
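The verification finding above rests on message accounting: comparing the set of messages one simulation logs as sent against the set the other logs as received. The sketch below illustrates that kind of check in Python; the function name, log format, and message identifiers are hypothetical, not taken from the actual MANA–JNE interface.

```python
# Hypothetical sketch of a message-accounting check like the one in the
# verification scenario: compare message IDs logged by the sending
# simulation (e.g., MANA) against IDs logged by the receiving network
# simulation (e.g., JNE). Field names and IDs are illustrative only.

def delivery_report(sent_ids, received_ids):
    """Summarize which sent messages were seen by the receiver."""
    sent = set(sent_ids)
    received = set(received_ids)
    missing = sent - received    # sent but never injected into the network
    spurious = received - sent   # received without a matching send record
    rate = len(sent & received) / len(sent) if sent else 1.0
    return {"delivery_rate": rate,
            "missing": sorted(missing),
            "spurious": sorted(spurious)}

# Example: five messages sent, one lost before injection.
report = delivery_report(["m1", "m2", "m3", "m4", "m5"],
                         ["m1", "m2", "m4", "m5"])
print(report["delivery_rate"])  # 0.8
print(report["missing"])        # ['m3']
```

A report like this distinguishes the two failure modes the finding describes: messages lost before injection (missing on the receive side) versus bookkeeping mismatches between the two tools' logs.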
Recommendations
- Use the interface to support an operational impact analysis.
- Explore alternatives to MANA.
- Explore porting MANA (or similar agent-based models) to other types of network models, such as the Navy Research Lab effort Extendable Mobile Ad-Hoc Network Emulator (EMANE) or NS-3.
- Revisit Regulation 5-11 and consider how validation can be achieved for different classes of M&S tools, such as agent-based models.
Table of Contents
Chapter One
Introduction
Chapter Two
The Interface
Chapter Three
Test Results
Chapter Four
Conclusions and Recommendations
Research conducted by
This research was sponsored by the U.S. Army Training and Doctrine Command, Cyber Center of Excellence, Capability Development Integration Directorate, Cyber Battle Lab, and conducted within the Forces and Logistics Program of the RAND Arroyo Center.
This report is part of the RAND Corporation Tool series. RAND tools may include models, databases, calculators, computer code, GIS mapping tools, practitioner guidelines, web applications, and various toolkits. All RAND tools undergo rigorous peer review to ensure both high data standards and appropriate methodology in keeping with RAND's commitment to quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.