A Guide for Analysis Using Advanced Distributed Simulation (ADS)

by Thomas W. Lucas, Robert Kerchner, John A. Friel, Daniel Jones

Part of the promise of Advanced Distributed Simulation (ADS), and justification for the large investment in ADS technologies, is that ADS will revolutionize how analysts do business. Presently, with ADS still in its infancy, the majority of its uses have been technology development, training, and demonstrations; however, ADS for analysis is rapidly becoming a reality and may affect important decisions. Pioneering analysis efforts include the Airborne Laser (ABL) tests, the Anti-Armor Advanced Technology Demonstration (A2ATD) experiments, and a joint Air Force and Navy effort to study whether and how the Cooperative Engagement Capability (CEC) should be extended to the Airborne Warning and Control System (AWACS). These efforts are revealing both a high potential benefit and a steep learning curve associated with ADS. The intent of this guide is to acquaint potential users and consumers with ADS so that it is used properly and does not suffer from excessive optimism followed by dashed expectations.

The Analytic Potential of ADS

ADS technologies offer many potential analytic benefits that extend and augment traditional analytic methods. They can provide the foundation for building the Joint Synthetic Battlespace (JSB) at the heart of the Air Force's A New Vector (1995). The other services and the Department of Defense (DoD) have similar hopes for ADS, stemming from the vision that ADS will mature enough to realistically simulate a seamless synthetic theater of war (STOW), with joint live, virtual, and constructive elements participating. The elements will all share the same virtual battlespace even though they are hosted at distributed home bases. While the JSB ideal is still only a vision, today's ADS, properly used, can provide tremendous analytic utility. ADS currently has the capability to

  • provide a realistic treatment of human performance, a notable weakness of constructive simulations,
  • obtain insights into the cause-and-effect "drivers" of combat, which can be extremely useful when developing tactical concepts to enhance or defend against new weapons systems,
  • communicate analytic results to decisionmakers more effectively,
  • facilitate multidisciplinary research teams that explicitly include warfighters—thus accruing credibility, and
  • enable multiple disparate service simulations to be combined into a single joint simulation for theaterwide scenarios, with service-accredited models at previously unobtainable levels of detail.

Challenges in Analytic Applications of ADS

Significant challenges must be overcome before the full ADS analysis potential can be realized. Some of the more important are:

  • The sheer complexity of a distributed joint STOW. Each component has its own specific assumptions and limitations. Accounting for these is critical in determining whether simulation results are credible or merely simulation artifacts.
  • The difficulties associated with exclusive use of human-in-the-loop (HITL) analysis. The most important are (1) the restriction to real time, which precludes exploring many scenario variations or achieving statistical precision, and (2) human factors such as obtaining representative samples of participants, participant learning and gaming, participant boredom, and the lack of exact reproducibility.
  • The logistical load and expense of distributed efforts, which are significantly greater than for single-suite simulations and simulators. Not only are the simulations distributed, but the expertise and much of the data are as well.

ADS Within a Broader Research Plan

Figure S.1. Interplay of Constructive and ADS/HITL Experiments in an Analysis Effort

The analytically oriented HITL ADS projects we have seen logically require a balanced mix of ADS and more traditional methods and should not rely exclusively on ADS exercises as the source of analytic information. In fact, we see a natural synergy between ADS and traditional methods; each supplies strength where the other is weak. Traditional methods allow for greater control and more factors to be varied, which is ideal for identifying critical variables or scenarios. ADS facilitates joint high-resolution scenarios with warfighters representing the important human dimension.

The above suggests the following analytic roles for ADS and traditional analysis: Use ADS primarily to inform about human performance factors in constructive models, cross-check constructive model results, and assess warfighter elements in a few carefully designed scenarios; use (relatively) inexpensive constructive models primarily to focus the limited ADS runs on the most important cases and perform the bulk of the exploration (after being informed by ADS). This constructive-ADS iteration continues as time and money permit.
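
To make this iteration concrete, the short Python sketch below (ours, not the report's) mimics the loop just described. The scenario scoring, the ADS "experiment," and the human-performance-factor (HPF) update are trivial stand-ins for real constructive models and HITL exercises; only the overall structure of the loop is the point.

    import random

    def constructive_explore(n_cases, hpf):
        """Cheap constructive runs: score many candidate scenarios (stand-in)."""
        return [(f"scenario-{i}", random.random() * hpf["detect_prob"])
                for i in range(n_cases)]

    def run_ads_experiment(scenario):
        """One scarce HITL ADS run for a high-value scenario (stand-in)."""
        return {"scenario": scenario, "observed_detect_prob": random.uniform(0.6, 0.9)}

    def update_hpf(ads_results):
        """Fold ADS observations of human performance back into the constructive model."""
        observed = [r["observed_detect_prob"] for r in ads_results]
        return {"detect_prob": sum(observed) / len(observed)}

    hpf = {"detect_prob": 0.8}      # initial human-performance-factor estimate
    ads_runs_remaining = 6          # HITL ADS runs are the scarce, expensive resource
    while ads_runs_remaining > 0:
        # Broad, inexpensive constructive exploration using current HPF estimates.
        candidates = constructive_explore(n_cases=200, hpf=hpf)
        # Spend a few high-value ADS runs on the scenarios flagged as most important.
        key = [s for s, score in sorted(candidates, key=lambda c: c[1], reverse=True)[:2]]
        results = [run_ads_experiment(s) for s in key]
        ads_runs_remaining -= len(results)
        # Refine the human performance factors from the ADS observations and iterate.
        hpf = update_hpf(results)
    print("final human-performance-factor estimate:", hpf)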

Of course, there may be issues, such as human factors, for which ADS can reliably be used exclusively. Furthermore, there may be issues that can be effectively addressed by traditional stand-alone constructive methods. For the many issues that benefit from a mixture of both ADS and traditional methods, Figure S.1 illustrates how this concept might look in the context of an analysis where ADS plays a significant role. Although the process shown here is idealized, we believe that it serves both as a practical guide for combined ADS and traditional analysis and as a goal to be achieved. ADS is used in three distinct ways in the scheme. The ADS experiment block (Block 1) in Figure S.1 refers to the use most visible to the consumers of the analysis, and corresponds to the high-value ADS runs for scenarios of interest. A second use of ADS is human-in-the-loop experiments aimed at human performance factors (HPFs) in the constructive simulations (Block 2). Furthermore, ADS can be used in a preliminary exploratory manner to identify HPFs that are likely scenario drivers (also within Block 2).

Key ADS Analysis Issues

Beyond the central theme of placing ADS within a broader analysis plan, some critical points include:

  • Interoperability is not guaranteed by compliance with standards. Unless great care is taken, a lack of interoperability will bias simulation outcomes. Key aspects to consider are differences in data, algorithms, resolution, terrain, visual displays, and human participants. These are often difficult to compare theoretically. We recommend that interoperability be studied through an iterative series of progressively larger empirical tests among components.
  • HITL ADS runs are a precious commodity and must be designed with great care, rather than executed as free play. The ADS runs will be most valuable if they are designed to address specific hypotheses.
  • The high dimensionality and small number of samples available in HITL ADS experiments mean the effort will benefit from advanced design of experiments (DOE) techniques; a small illustrative design appears after this list.
  • Working in a training environment greatly restricts the types of analysis one can perform.
  • A successful effort requires multidisciplinary participation in the total analysis process, to include analysts, site managers, modelers, operators, other warfighters, and network managers.
  • The complexity of large distributed efforts puts an added burden on testing and rehearsals. The rehearsals should include a mock analysis to ensure the needed information can be obtained.
  • Post-exercise analysis should include after-action reviews, statistics, and visual replays, and should allow for the insertion of new objects for visualization and analysis.
  • Model and data freeze dates must be established and adhered to.
  • ADS experiments involve an inevitable reduction in reliability (i.e., simulation or network failures). This should be planned for, including a real-time contingency playbook.
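
As a concrete illustration of the DOE point above, the Python sketch below (ours, not the report's; the factor names are hypothetical) constructs a standard 2^(7-4) fractional factorial design, which screens seven two-level factors in only eight runs, roughly the number of runs a HITL ADS exercise might afford.

    from itertools import product

    # Seven hypothetical two-level factors for an HITL ADS exercise.
    factors = ["threat_density", "jamming", "weather", "rules_of_engagement",
               "sensor_mix", "crew_experience", "comms_load"]

    runs = []
    for a, b, c in product((-1, 1), repeat=3):        # full factorial in 3 base factors
        d, e, f, g = a * b, a * c, b * c, a * b * c   # generators: D=AB, E=AC, F=BC, G=ABC
        runs.append(dict(zip(factors, (a, b, c, d, e, f, g))))

    for i, run in enumerate(runs, 1):                 # eight runs cover all seven factors
        levels = ", ".join(f"{name}={'hi' if lvl == 1 else 'lo'}" for name, lvl in run.items())
        print(f"run {i}: {levels}")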

Conclusion

We believe that ADS has great potential for increasing the effectiveness, scope, and depth of analysis, but the role of ADS in an analysis must be carefully specified. In combination with traditional methods, ADS can more credibly represent human interactions and improve this critical component of our models, whereas traditional methods can be used to examine a greater breadth of cases and focus on those conditions where ADS methods are essential. These benefits will not be gained without overcoming a variety of technical, operational, and administrative challenges. In particular, we feel that resolving problems with interoperability among models is essential. Unfortunately, "plug and play" interoperability has not been successfully addressed even in contexts that are much simpler than distributed combat simulation. Thus, there is little reason to expect that these challenges will be fully solved, for general ADS combat analysis purposes, in the near future.

To improve model interoperability we need to establish well-accepted approaches to representing combat elements, document the models and standards used, build up trusted and tested implementations through frequent and wide use of the models, and provide easy accessibility to the models. Further research in these areas is needed if ADS is to become an oft-used and credible vehicle for analysis. Moreover, given that we believe ADS is often best used in conjunction with stand-alone constructive simulations, investments must also be made in these models and the analysis methods that use them.

Table of Contents

  • Chapter 1: Introduction and Purpose
  • Chapter 2: What Are ADS and Its Roles in Analysis?
  • Chapter 3: High-Level Design
  • Chapter 4: Detailed Design
  • Chapter 5: Exercise Preparation
  • Chapter 6: Integration and Testing
  • Chapter 7: Exercise Management
  • Chapter 8: Postexercise
  • Chapter 9: Verification, Validation, and Accreditation
  • Chapter 10: Effects of Using a Training/Operational Exercise for Analysis
  • Chapter 11: Conclusion
  • Appendix A: Expanded Discussion on Design of Experiments
  • Appendix B: Community Actions to Facilitate Analysis
  • Bibliography

Research conducted by

This work was performed in the Effective Application of Advanced Distributed Simulation project in RAND's Project AIR FORCE. This project is sponsored by Major General Tom Case of AF/XOM.

This report is part of the RAND Corporation Monograph report series. The monograph/report was a product of the RAND Corporation from 1993 to 2003. RAND monograph/reports presented major research findings that addressed the challenges facing the public and private sectors. They included executive summaries, technical documentation, and synthesis pieces.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.