Bigger and Better Campaign Models

Will They Improve Decisionmaking?

Richard Hillestad, Louis R. Moore, Bart E. Bennett

Research Summary
Published 1996

The defense community is investing hundreds of millions of dollars to make computer models more realistic, more comprehensive, and more highly automated. A recent RAND study suggests that this investment may advance computer science but may do little to improve defense analysis. In the first of a set of reports, Modeling for Campaign Analysis: Lessons for the Next Generation of Models, Executive Summary, the authors urge DoD and the Services to balance their investment in models with a concerted effort to make the analytic process more transparent, effective, and open to review and with investments in data development and accessibility. The authors claim that, without these parallel efforts, new models will not fulfill their promise.

The report focuses on the thorniest problems in enhancing campaign analysis that uses models. Experienced analysts and modelers, the authors draw from the lessons they have learned in building and using models for campaign analysis so that their successes can be exploited and their failures avoided. The relatively modest progress in campaign modeling over nearly 50 years of effort, they argue, is due in part to the fact that such lessons are not often broadly shared.

Putting Analysis First

The single-minded focus on developing a more realistic campaign model is based on a dubious assumption: that the model itself is the problem and that, if we can just get it "right," we will improve military decisionmaking. The authors emphasize that the model is but one tool in the analytic process. It is the analyst who defines objectives, measures, and alternatives; chooses the models; tests hypotheses and sensitivities; identifies causes and effects; and elucidates the analysis. The model can only quantify the effects of certain systems, tactics, and strategies on certain measures under specific conditions.

This relationship between analysis and modeling has important implications for model development. For example, the level of detail in a model should be appropriate for the required analytic task. The current drive to include as much detail as possible, or to allow too wide a range of detail, may hinder analysis. The greater the detail in a model, the more difficult it is to identify cause and effect. Depending on the analytic objectives, a simpler model may do a better job. Flexibility in representation—or multiple levels of resolution—within a plausible range should be built into the model, along with the ability to remove portions of the model not germane to the analysis.
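As a concrete illustration of this kind of flexibility, the sketch below (not drawn from the report; the model, names, and numbers are hypothetical) shows a campaign-model component whose attrition submodel can be run at a coarse or a detailed level of resolution, so the analyst can match the level of detail to the analytic task and omit detail that is not needed.

```python
# Illustrative sketch (not from the report): a campaign-model component that
# lets the analyst choose the resolution of its attrition submodel, or omit
# detail that is not germane to the question at hand.
from dataclasses import dataclass


@dataclass
class Force:
    units: float          # aggregate count of combat units
    effectiveness: float  # notional kills inflicted per unit per day


def attrition_aggregate(blue: Force, red: Force, days: int):
    """Coarse, Lanchester-style attrition: fast to run and easy to trace."""
    for _ in range(days):
        blue_losses = red.units * red.effectiveness
        red_losses = blue.units * blue.effectiveness
        blue.units = max(blue.units - blue_losses, 0.0)
        red.units = max(red.units - red_losses, 0.0)
    return blue, red


def attrition_detailed(blue: Force, red: Force, days: int):
    """Placeholder for a higher-resolution treatment (terrain, weapon mix, ...)."""
    raise NotImplementedError("swap in only when the question demands the detail")


RESOLUTIONS = {"aggregate": attrition_aggregate, "detailed": attrition_detailed}


def run_campaign(resolution: str, blue: Force, red: Force, days: int = 10):
    # The analyst selects the level of detail appropriate to the analytic task.
    return RESOLUTIONS[resolution](blue, red, days)


blue_after, red_after = run_campaign("aggregate", Force(100, 0.05), Force(120, 0.04))
print(f"blue survivors: {blue_after.units:.1f}, red survivors: {red_after.units:.1f}")
```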

Furthermore, analysis is best served by multiple independent models and multiple independent analyses of the same problems. Instead of driving toward a comprehensive supermodel that consolidates the best of many models, the defense community may be better served by developing methods for linking diverse models and analyses to a common point of reference. Comparative analysis often highlights assumptions about data, scenarios, and effectiveness that are not apparent in multiple trials with a single model.
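The sketch below illustrates comparative analysis in this spirit (the two models and the scenario are hypothetical, not taken from the report): two independent attrition models are run against a common reference scenario, and their results are placed side by side so that divergences point to differing embedded assumptions.

```python
# Illustrative sketch (hypothetical models and scenario): two independent
# attrition models run against a common reference scenario, with results
# placed side by side so divergences expose differing assumptions.
def model_a(blue_units: float, red_units: float, days: int) -> float:
    """Linear attrition; returns surviving blue units."""
    for _ in range(days):
        blue_next = max(blue_units - 0.04 * red_units, 0.0)
        red_next = max(red_units - 0.05 * blue_units, 0.0)
        blue_units, red_units = blue_next, red_next
    return blue_units


def model_b(blue_units: float, red_units: float, days: int) -> float:
    """Alternative model with diminishing attrition; returns surviving blue units."""
    for _ in range(days):
        blue_next = max(blue_units - 0.04 * red_units ** 0.9, 0.0)
        red_next = max(red_units - 0.05 * blue_units ** 0.9, 0.0)
        blue_units, red_units = blue_next, red_next
    return blue_units


reference_scenario = {"blue_units": 100.0, "red_units": 120.0, "days": 10}
for name, model in [("model_a", model_a), ("model_b", model_b)]:
    print(f"{name}: {model(**reference_scenario):.1f} blue units surviving")
```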

Managing the Development Process

The authors agree that models need to be improved to reflect modern combat. The driving principle behind model development should be to serve analysis rather than to advance technology. Too often, they argue, the analytical question is changed to adapt to the model rather than the other way around.

The report enumerates the issues that need to be addressed once the decision has been made to develop a new model. These include underlying structure, levels of aggregation, and how submodels will be linked to the main model. The choice of structure is one of the most critical decisions because it determines how objects and information can be aggregated and what level of resolution the simulation can represent. The structure affects the adaptability of the simulation to a range of analytic problems, efficiency in processing, transparency of cause and effect, and ease with which future changes can be made. The report identifies components of structure—from the treatment of time and space to the extent of human interaction—and explains which structures are less limiting than others in terms of campaign-level representation.
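The treatment of time is one such structural component. The sketch below (hypothetical, not from the report) contrasts a fixed-time-step loop, which is simple and transparent, with an event-driven queue, which advances only when something significant happens and so can more readily accommodate mixed levels of resolution.

```python
# Illustrative sketch (hypothetical events): two common treatments of time.
# A fixed-step loop updates every object each step and is easy to follow;
# an event queue advances only when something significant happens.
import heapq


def time_stepped(days: float, step: float = 1.0) -> int:
    """Count update cycles in a fixed-time-step simulation."""
    t, cycles = 0.0, 0
    while t < days:
        # every object would be updated here, whether or not anything changed
        t += step
        cycles += 1
    return cycles


def event_driven(events) -> list:
    """Process (time, description) events in time order."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        log.append(heapq.heappop(queue))  # work is done only at event times
    return log


print("fixed-step cycles:", time_stepped(10.0))
print("event log:", event_driven([(2.5, "air strike"), (0.5, "recon report"), (7.0, "resupply")]))
```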

Despite the importance of command, control, communications, computers, and intelligence (C4I) on the battlefield, this function is the least understood and most poorly modeled in campaign-level simulations. Some models simulate nearly perfect battlefield intelligence, giving each side unrealistic knowledge of the other side's positions, forces, capabilities, and intentions. Many models use as inputs scripted force allocations that do not adapt to changes in the situation during the simulated battle. The authors recommend research into the different ways human behavior can be simulated in models—interactive or human-in-the-loop, scripted decisions, rule-based or expert systems, tactical algorithms, value-driven methods, learning algorithms, and objective-driven optimization and gaming—and suggest that any new model incorporate structures that allow for the greatest flexibility in representing human decisionmaking, so that the analyst can select the method that best suits the analysis.
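A minimal sketch of such a structure appears below (the policies, state variables, and numbers are hypothetical, not the authors' design): the simulation accepts any decision policy through a common interface, so a scripted allocation can be swapped for a rule-based one, or for a human-in-the-loop prompt, without restructuring the model.

```python
# Illustrative sketch (hypothetical interfaces): a simulation that accepts any
# decision policy, so the analyst can swap a scripted allocation, a rule-based
# policy, or a human-in-the-loop prompt without restructuring the model.
from typing import Callable, Dict

State = Dict[str, float]     # e.g., the perceived situation
Decision = Dict[str, float]  # e.g., sortie allocation by mission


def scripted_policy(schedule: Dict[int, Decision]) -> Callable[[int, State], Decision]:
    def decide(day: int, state: State) -> Decision:
        # Follow the script; fall back to an even split when no entry exists.
        return schedule.get(day, {"close_air_support": 0.5, "interdiction": 0.5})
    return decide


def rule_based_policy(day: int, state: State) -> Decision:
    # Adapt to the perceived situation instead of following a fixed script.
    if state["enemy_advance_km"] > 20:
        return {"close_air_support": 0.8, "interdiction": 0.2}
    return {"close_air_support": 0.3, "interdiction": 0.7}


def run(policy: Callable[[int, State], Decision], days: int = 3) -> State:
    state: State = {"enemy_advance_km": 0.0}
    for day in range(days):
        allocation = policy(day, state)
        state["enemy_advance_km"] += 15 * (1 - allocation["close_air_support"])
    return state


print(run(scripted_policy({0: {"close_air_support": 1.0, "interdiction": 0.0}})))
print(run(rule_based_policy))
```

Because both policies share the same interface, the surrounding simulation does not change when the analyst switches decision methods; only the policy passed to `run` does.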

Overcoming Data Problems

Preparing and verifying data—the most time-consuming aspect of most analyses—can be made more efficient by improving data input and output processes and standards, creating graphical user interfaces that support checking and cross-checking, and improving data-aggregation functions. According to the authors, such steps are as important to analysis as enhancing the models themselves.
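The sketch below gives a flavor of such automated checks (the data items and plausibility bounds are hypothetical): simple cross-checks run against an input database before any simulation consumes it.

```python
# Illustrative sketch (hypothetical data and bounds): automated cross-checks on
# an input database, of the kind better input processes and interfaces could run
# before a campaign simulation consumes the data.
def cross_check(order_of_battle: dict, sortie_rates: dict) -> list:
    problems = []
    for unit, count in order_of_battle.items():
        if count < 0:
            problems.append(f"{unit}: negative unit count {count}")
    for aircraft, rate in sortie_rates.items():
        if not 0 <= rate <= 5:
            problems.append(f"{aircraft}: implausible sortie rate {rate}/day")
        if aircraft not in order_of_battle:
            problems.append(f"{aircraft}: sortie rate given but not in order of battle")
    return problems


issues = cross_check({"F-16 squadron": 3, "armor brigade": -1},
                     {"F-16 squadron": 2.5, "F-15 squadron": 9.0})
for issue in issues:
    print(issue)
```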

Aggregating data from higher-resolution models requires a knowledgeable analyst who can determine when the approximation is good enough, what cases to use from the high-resolution model, and which parameters to adjust—time, geographic dispersion, types of objects or measures—to achieve a representative database. The authors recommend that the approach, data, and tests of approximations be published and subject to peer review to help establish credibility and consistency.
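A simplified example of this kind of aggregation is sketched below (the engagement data and functional form are hypothetical): an aggregate attrition coefficient is fit to a handful of high-resolution cases, and the approximation error is reported so that the fit, the cases used, and the test could be published and reviewed.

```python
# Illustrative sketch (hypothetical data): fit an aggregate attrition coefficient
# to results of high-resolution engagements, then report the approximation error
# so the fit and its tests can be documented and peer reviewed.
# Each case: (shooters engaged, kills observed in the high-resolution run).
high_res_cases = [(10, 2.1), (20, 3.8), (40, 8.3), (80, 15.6)]

# Least-squares fit of kills ~ k * shooters (no intercept).
numerator = sum(shooters * kills for shooters, kills in high_res_cases)
denominator = sum(shooters * shooters for shooters, _ in high_res_cases)
k = numerator / denominator

errors = [abs(kills - k * shooters) / kills for shooters, kills in high_res_cases]
print(f"aggregate coefficient k = {k:.3f} kills per shooter")
print(f"worst-case relative error across cases: {max(errors):.1%}")
```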

In fact, a common effort by the defense community in sharing data, critiquing data generation, and testing different approaches to linking models and aggregating data could reap dramatic benefits. The quality and timeliness of analysis are suffering from too many redundant efforts to develop databases and too little testing and review of the appropriateness of selected sources and embedded assumptions.

Achieving Transparency

The greatest challenge of modeling for analysis, especially with large campaign models, is understanding the cause and effect relationships that occur within the model, that is, identifying how changes in the inputs influence changes in outcomes and how these are in turn affected by other data or assumptions. Such transparency is only partly inherent in the model; it is largely the result of how an analyst uses the model. Although there is no substitute for the experience of a practiced analyst, the authors recommend techniques for improving transparency, such as offering education and training to analysts, conducting various kinds of sensitivity tests, and performing comparative analysis with different models or different uses of the same model. Again, the emphasis is on improving analysis rather than on increasing model verisimilitude.
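One such sensitivity technique is sketched below (the stand-in model and its inputs are hypothetical): each input is varied one at a time around a baseline, and the change in a campaign outcome is reported, which makes the model's cause-and-effect behavior easier to see.

```python
# Illustrative sketch (hypothetical model and inputs): a one-at-a-time
# sensitivity sweep that shows how each input drives the outcome, one of the
# techniques for making cause and effect in a model more transparent.
def campaign_outcome(inputs: dict) -> float:
    """Stand-in for a campaign model: days for red to reach its objective."""
    advance = inputs["red_advance_km_per_day"] * (1 - inputs["blue_cas_effect"])
    return inputs["objective_km"] / max(advance, 0.1)


baseline = {"red_advance_km_per_day": 20.0, "blue_cas_effect": 0.3, "objective_km": 200.0}
base_result = campaign_outcome(baseline)

for name in baseline:
    for factor in (0.8, 1.2):  # vary each input by plus or minus 20 percent
        excursion = dict(baseline, **{name: baseline[name] * factor})
        delta = campaign_outcome(excursion) - base_result
        print(f"{name} x{factor}: outcome changes by {delta:+.1f} days")
```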

The authors recommend that the DoD go beyond its current review of campaign modeling to conduct a similar review of campaign-level analysis. A committee of experts, users, and analysts would focus on the kinds of model-based analysis being done within current DoD organizations and would examine analytic approaches, data management, the role of models in analysis, and the effect of the analysis on defense policy decisions. The authors contend that a formal process for reviewing model-based analysis—along with broad investments in data preparation—would yield greater benefits than the exclusive focus on the models themselves.

