Jan 1, 1996
Will They Improve Decisionmaking?
The defense community is investing hundreds of millions of dollars to make computer models more realistic, more comprehensive, and more highly automated. A recent RAND study suggests that this investment may advance computer science but may do little to improve defense analysis. In the first of a set of reports, Modeling for Campaign Analysis: Lessons for the Next Generation of Models, Executive Summary, the authors urge DoD and the Services to balance their investment in models with a concerted effort to make the analytic process more transparent, effective, and open to review and with investments in data development and accessibility. The authors claim that, without these parallel efforts, new models will not fulfill their promise.
The report focuses on the thorniest problems in enhancing campaign analysis that uses models. The authors, themselves experienced analysts and modelers, draw on the lessons they have learned in building and using models for campaign analysis so that their successes can be exploited and their failures avoided. The relatively modest progress in campaign modeling over nearly 50 years of effort, they argue, is due in part to the fact that such lessons are not often broadly shared.
The single-minded focus on developing a more realistic campaign model is based on a dubious assumption: that the model itself is the problem and that, if we can just get it "right," we will improve military decisionmaking. The authors emphasize that the model is but one tool in the analytic process. It is the analyst who defines objectives, measures, and alternatives; chooses the models; tests hypotheses and sensitivities; identifies causes and effects; and elucidates the analysis. The model can only quantify the effects of certain systems, tactics, and strategies on certain measures under specific conditions.
This relationship between analysis and modeling has important implications for model development. For example, the level of detail in a model should be appropriate for the required analytic task. The currently popular drive to include as much detail as possible, or to allow too wide a range of detail, may hinder analysis. The greater the detail in a model, the more difficult it is to identify cause and effect. Depending on the analytic objectives, a simpler model may do a better job. Flexibility in representation—or multiple levels of resolution—within a plausible range should be built into the model, along with the ability to remove portions of the model not germane to the analysis.
Furthermore, analysis is best served by multiple independent models and multiple independent analyses of the same problems. Instead of driving toward a comprehensive supermodel that consolidates the best of many models, the defense community may be better served by developing methods for linking diverse models and analyses to a common point of reference. Comparative analysis often highlights assumptions about data, scenarios, and effectiveness that are not apparent in multiple trials with a single model.
The authors agree that models need to be improved to reflect modern combat. The driving principle behind model development should be to serve analysis rather than to advance technology. Too often, they argue, the analytical question is changed to adapt to the model rather than the other way around.
The report enumerates the issues that need to be addressed once the decision has been made to develop a new model. These include underlying structure, levels of aggregation, and how submodels will be linked to the main model. The choice of structure is one of the most critical because it determines how objects and information can be aggregated and what level of resolution the simulation can represent. The structure affects the adaptability of the simulation to a range of analytic problems, efficiency in processing, transparency of cause and effect, and ease with which future changes can be made. The report identifies components of structure—from the treatment of time and space to the extent of human interaction—and explains which structures are less limiting than others in terms of campaign-level representation.
Despite the importance of command, control, communications, computers, and intelligence (C4I) on the battlefield, this function is the least understood and most poorly modeled in campaign-level simulations. Some models simulate nearly perfect battlefield intelligence, in which each side has unrealistically complete knowledge of the other side's positions, forces, capabilities, and intentions. Many models use as inputs scripted force allocations that do not adapt to changes in situation during the simulated battle. The authors recommend research into the different ways human behavior can be simulated in models—interactive or human-in-the-loop, scripted decisions, rule-based or expert systems, tactical algorithms, value-driven methods, learning algorithms, and objective-driven optimization and gaming—and suggest that any new model incorporate structures that allow for the greatest flexibility in representing human decisionmaking so that the analyst can select the method that best suits the analysis.
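The structural flexibility the authors call for can be illustrated with a minimal sketch. This is not from the report: the model, policies, and all names here (scripted_policy, rule_based_policy, run_campaign) are hypothetical stand-ins showing how a simulation can treat the decisionmaking method as a pluggable component rather than hard-wiring one representation.

```python
# Illustrative sketch (not from the report): a toy campaign loop structured so
# the analyst can swap the representation of human decisionmaking.
from typing import Callable, Dict, List

State = Dict[str, float]  # e.g., {"enemy_strength": ..., "own_reserves": ...}

def scripted_policy(schedule: List[float]) -> Callable[[int, State], float]:
    """Fixed force allocations that ignore the evolving situation."""
    def decide(step: int, state: State) -> float:
        return schedule[min(step, len(schedule) - 1)]
    return decide

def rule_based_policy(threshold: float) -> Callable[[int, State], float]:
    """A simple rule: commit heavily only while enemy strength exceeds a threshold."""
    def decide(step: int, state: State) -> float:
        return 0.8 if state["enemy_strength"] > threshold else 0.2
    return decide

def run_campaign(policy: Callable[[int, State], float], steps: int = 3) -> List[float]:
    """Toy simulation loop: the decision method is a parameter, not hard-wired."""
    state: State = {"enemy_strength": 100.0, "own_reserves": 50.0}
    allocations = []
    for step in range(steps):
        alloc = policy(step, state)
        allocations.append(alloc)
        state["enemy_strength"] *= (1.0 - 0.5 * alloc)  # attrition grows with commitment
    return allocations

# The same simulation accepts either representation:
scripted = run_campaign(scripted_policy([0.3, 0.5, 0.7]))
adaptive = run_campaign(rule_based_policy(threshold=60.0))
```

The point of the structure is that the scripted allocations never respond to the simulated battle, while the rule-based policy adapts as enemy strength falls, and the analyst can choose between them (or richer methods) without rebuilding the model.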
Preparing and verifying data—the most time-consuming aspect of most analysis—can be made more efficient by improving data input and output processes and standards, creating graphical user interfaces that simplify checking and cross-checking, and improving data-aggregation functions. According to the authors, such steps are as important to analysis as enhancing the models themselves.
Aggregating data from higher-resolution models requires a knowledgeable analyst who can determine when the approximation is good enough, what cases to use from the high-resolution model, and which parameters to adjust—time, geographic dispersion, types of objects or measures—to achieve a representative database. The authors recommend that the approach, data, and tests of approximations be published and subject to peer review to help establish credibility and consistency.
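A minimal sketch of the aggregation step described above may help. The data here are notional, and the mean-plus-worst-case-deviation check is only one hypothetical example of the published, reviewable tests of approximation the authors recommend.

```python
# Illustrative sketch (not from the report): deriving an aggregate attrition
# coefficient from notional high-resolution engagement results, together with
# a simple, publishable test of how good the approximation is.

high_res_losses = [0.042, 0.051, 0.047, 0.060, 0.045]  # notional per-engagement loss fractions

# Aggregate: use the mean as the campaign-level attrition coefficient.
aggregate_rate = sum(high_res_losses) / len(high_res_losses)

# Test of approximation: worst-case deviation of any case from the aggregate.
worst_error = max(abs(x - aggregate_rate) for x in high_res_losses)

# The analyst must still judge whether worst_error is "good enough" for the
# analytic question; publishing checks like this one supports peer review.
```

The code carries no judgment of its own: deciding whether a 0.011 worst-case deviation around a 0.049 mean is acceptable is exactly the knowledgeable-analyst task the authors describe.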
In fact, a common effort by the defense community in sharing data, critiquing data generation, and testing different approaches to linking models and aggregating data could reap dramatic benefits. The quality and timeliness of analysis are suffering from too many redundant efforts to develop databases and too little testing and review of the appropriateness of selected sources and embedded assumptions.
The greatest challenge of modeling for analysis, especially with large campaign models, is understanding the cause and effect relationships that occur within the model, that is, identifying how changes in the inputs influence changes in outcomes and how these are in turn affected by other data or assumptions. Such transparency is only partly inherent in the model; it is largely the result of how an analyst uses the model. Although there is no substitute for the experience of a practiced analyst, the authors recommend techniques for improving transparency, such as offering education and training to analysts, conducting various kinds of sensitivity tests, and performing comparative analysis with different models or different uses of the same model. Again, the emphasis is on improving analysis rather than on increasing model verisimilitude.
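One of the sensitivity techniques mentioned above can be sketched concretely. This is a hedged illustration, not the report's method: the toy model and its parameters (attrition_rate, sortie_rate) are hypothetical, and the one-at-a-time perturbation shown is the simplest kind of sensitivity test an analyst might run to expose cause and effect.

```python
# Illustrative sketch (not from the report): a one-at-a-time sensitivity test
# used to make a model's cause-and-effect behavior more transparent.

def toy_campaign_model(params: dict) -> float:
    """Hypothetical stand-in model: outcome rises with sorties, falls with attrition."""
    return params["sortie_rate"] * 10.0 / params["attrition_rate"]

def one_at_a_time_sensitivity(model, baseline: dict, delta: float = 0.1) -> dict:
    """Perturb each input upward by a fraction delta and record the output change."""
    base_out = model(baseline)
    effects = {}
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * (1.0 + delta)
        effects[name] = model(perturbed) - base_out
    return effects

baseline = {"attrition_rate": 0.05, "sortie_rate": 2.0}
effects = one_at_a_time_sensitivity(toy_campaign_model, baseline)
# Inputs whose perturbation moves the outcome most deserve the analyst's scrutiny.
```

Even this crude test makes direction and magnitude of influence explicit, which is the transparency the authors argue the analyst, not the model alone, must supply.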
The authors recommend that the DoD go beyond its current review of campaign modeling to conduct a similar review of campaign-level analysis. A committee of experts, users, and analysts would focus on the kinds of model-based analysis being done within current DoD organizations and would examine analytic approaches, data management, the role of models in analysis, and the effect of the analysis on defense policy decisions. The authors contend that a formal process for reviewing model-based analysis—along with broad investments in data preparation—would yield greater benefits than the exclusive focus on the models themselves.