Using the National Training Center Instrumentation System to Aid Simulation-Based Acquisition

by Andrew Cady

Current data sources for the simulation models used in the United States Department of Defense (DoD) acquisition process are many and varied, but none adequately represents how weapon systems behave in combat in a robust, quantifiable manner, leading to uncertainty in acquisition decisionmaking. The objective of this dissertation is to improve this process by developing empirically derived measures of direct fire behaviors from U.S. Army National Training Center (NTC) data and by demonstrating how these measures can be used to support acquisition decisions based on the output of simulation-based modeling. To accomplish this, I employ a three-part methodology.

First, I identify the current data sources for the models and simulations used in the defense acquisition process: historical combat, operational testing, other simulations, and subject matter expert (SME) judgment. No single one of these four sources can adequately describe the combat behaviors of weapon systems across a wide range of operational environments.

Second, I turn to NTC data as a potential solution to this gap. I first examine prior NTC-based research and the lessons this literature holds for current and future work. I then examine the doctrinal underpinnings of maneuver combat behaviors, deriving five important aspects of direct fire, two of which — direct fire engagement, and movement and maneuver — are operationalized in this dissertation. I examine the NTC instrumentation system's data generation process, strengths, and drawbacks to determine whether it can measure these aspects. Finally, I describe four measures that I derive to characterize maneuver combat behavior: weapon system probability of hit, weapon system rate of fire, unit dispersion, and unit speed.
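The Python sketch below illustrates one way such measures could be computed from instrumentation-style records. It is only a minimal sketch: the record layout and column names (weapon_system, hit, time, x, y) are hypothetical stand-ins for NTC-IS pairing and position data, not the actual schema used in the dissertation.

```python
# Illustrative sketch only: the record layout and column names are hypothetical
# stand-ins for NTC-IS pairing and position data, not the actual schema.
import numpy as np
import pandas as pd


def probability_of_hit(fires: pd.DataFrame) -> pd.Series:
    """Hits divided by recorded fire events, per weapon system."""
    return fires.groupby("weapon_system")["hit"].mean()


def rate_of_fire(fires: pd.DataFrame) -> pd.Series:
    """Rounds fired per minute of engagement, per weapon system."""
    grouped = fires.groupby("weapon_system")
    minutes = (grouped["time"].max() - grouped["time"].min()).dt.total_seconds() / 60.0
    return grouped.size() / minutes


def unit_dispersion(positions: pd.DataFrame) -> pd.Series:
    """Mean distance (meters) of each platform from the unit centroid, per time step."""
    def spread(snapshot: pd.DataFrame) -> float:
        dx = snapshot["x"] - snapshot["x"].mean()
        dy = snapshot["y"] - snapshot["y"].mean()
        return float(np.hypot(dx, dy).mean())
    return positions.groupby("time").apply(spread)


def unit_speed(positions: pd.DataFrame) -> float:
    """Average speed (meters per second) of the unit centroid over the record."""
    centroids = positions.groupby("time")[["x", "y"]].mean().sort_index()
    traveled = np.hypot(centroids["x"].diff(), centroids["y"].diff()).sum()
    elapsed = (centroids.index[-1] - centroids.index[0]).total_seconds()
    return float(traveled / elapsed)
```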

Third, I compare the measures derived in this dissertation against baseline measures from the Joint Conflict and Tactical Simulation (JCATS) model to determine what difference the two sources of measures — actual NTC behavior and the JCATS baseline — make in three outcomes: exchange ratio, rate of force drawdown, and volume of fire. To perform this comparison, I create a scenario based on prior simulation studies, along with four excursions that test how changes in mission, enemy, and terrain affect the impact of data source. I analyze the results of 300 runs of the JCATS model using a series of linear regressions. For each excursion, the regression models indicate a highly significant effect of data source on each model outcome.
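As a concrete illustration of this comparison step, the snippet below sketches how each outcome could be regressed on an indicator for data source within each excursion. The file name, column names, and model form are assumptions made for illustration; they are not the dissertation's actual analysis code.

```python
# Illustrative sketch only: file name, column names, and model form are assumed.
import pandas as pd
import statsmodels.formula.api as smf

# One row per JCATS run: which data source fed the model ("ntc" vs. "baseline"),
# which excursion was run, and the three outcomes of interest.
runs = pd.read_csv("jcats_runs.csv")

outcomes = ["exchange_ratio", "drawdown_rate", "volume_of_fire"]

# Fit a separate linear regression for each outcome within each excursion,
# testing whether the choice of data source shifts the outcome.
for excursion, subset in runs.groupby("excursion"):
    for outcome in outcomes:
        fit = smf.ols(f"{outcome} ~ C(data_source)", data=subset).fit()
        effect = fit.params.filter(like="data_source")
        p_values = fit.pvalues.filter(like="data_source")
        print(excursion, outcome, effect.to_dict(), p_values.to_dict())
```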

I conclude this dissertation by recommending that the measures described herein form the basis for a larger system of NTC-based behavioral measurement for modeling and simulation (M&S) data. I also recommend several software and hardware improvements to the NTC instrumentation system that could improve its utility as both a data source and a training resource. As future research, I recommend applying advanced analytic techniques to these data, extending these methods to other combat training centers, and applying these measures to training and tactics development.

Table of Contents

  • Chapter One: Introduction
  • Chapter Two: Current Sources of Data for Combat Simulation Modeling
  • Chapter Three: Deriving Behavioral Combat Measures from NTC-IS
  • Chapter Four: Testing the Difference in JCATS Model Outcomes from Using NTC-based Data
  • Chapter Five: Conclusions and Policy Recommendations
  • Appendix A: Pairing Fires and Hits in NTC-IS
  • Appendix B: Line of Sight Algorithm Description and Assumptions
  • Appendix C: Additional NTC-IS Measures Not Tested in this Research
  • Appendix D: Verification and Validation of the NTC-IS Data
  • Appendix E: Full Results of NTC-IS Analysis
  • Appendix F: Regression Model Specifications and Diagnostics

This document was submitted as a dissertation in September 2017 in partial fulfillment of the requirements of the doctoral degree in public policy analysis at the Pardee RAND Graduate School. The faculty committee that supervised and approved the dissertation consisted of Bryan Hallmark (Chair), Joe Martz, and Randall Steeb.

This publication is part of the RAND Corporation Dissertation series. Pardee RAND dissertations are produced by graduate fellows of the Pardee RAND Graduate School, the world's leading producer of Ph.D.s in policy analysis. Each dissertation is supervised, reviewed, and approved by a Pardee RAND faculty committee.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.