Apr 13, 2012
Military campaign assessments are a cornerstone of sound strategic decisionmaking. A RAND study examined the process and methods used to assess the progress of counterinsurgency (COIN) campaigns in an effort to inform ongoing assessments of operations in Afghanistan and, ultimately, contribute to improvements in military doctrine. In late 2011, after the study had concluded, the International Security Assistance Force (ISAF) in Afghanistan began developing and implementing an improved COIN assessment methodology. These improvements reflect some of the same perspectives provided in other contemporary analyses of the campaign assessment process and in the RAND study, the results of which were circulated among U.S. and ISAF leadership at conferences in 2010-2011. The RAND study found that the COIN assessment processes used by the U.S. military do not accurately reflect the progress of the military campaign. This finding highlighted the need for a process that adds transparency and context to military campaign assessments, making them more credible and useful at all levels of decisionmaking.
An examination of theater-level assessments conducted in Vietnam, Iraq, and Afghanistan showed that U.S. COIN assessment has long relied on centrally directed data collection and quantitative methods at the expense of context and relevant qualitative information. Traditional approaches, using systems analysis or effects-based assessment, can result in unrealistic requirements and misleading data. Aggregating these data through averaging or other means produces summary interpretations and graphical representations that may not accurately reflect the strategic situation, omitting details that are strategically important for resource allocation decisions and for monitoring the progress of COIN efforts.
The figure shows how aggregating data can hide important meaning and context when attempting to characterize popular support. In this notional example of a color-coded assessment, each small box represents a village. The black box represents a village that supports the insurgency. Aggregating these data from the district to the theater level without accompanying contextual information hides this small but possibly highly influential insurgent-leaning population at the district level, as well as the "fence-sitters" (unshaded boxes, representing a neutral population) at the provincial level. This village or district could house the 1 percent of the population capable of undermining strategic success, but the strategic assessment would hide this critical insight.
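The masking effect described above can be made concrete with a small numerical sketch. The scores, village counts, and rollup rule below are hypothetical illustrations, not data or methods from the RAND study; they simply show how averaging village-level assessments upward erases a small but strategically significant hostile population.

```python
# Notional illustration of how averaging hides a hostile minority.
# Scores: +1 = supports the government, 0 = neutral, -1 = supports the insurgency.

def aggregate(scores):
    """Roll scores up into a single average, as a naive centralized
    assessment process might."""
    return sum(scores) / len(scores)

# One district of ten villages: nine supportive, one insurgent-leaning.
district = [1, 1, 1, 1, 1, 1, 1, 1, 1, -1]

# A province of five such districts, the other four uniformly supportive.
province = [district] + [[1] * 10 for _ in range(4)]

district_avg = aggregate(district)                         # 0.8: reads "mostly supportive"
theater_avg = aggregate([aggregate(d) for d in province])  # 0.96: reads "supportive"

print(district_avg, theater_avg)
# The lone insurgent-leaning village (2% of this notional population) has
# effectively vanished from the theater-level number, even though it may be
# exactly the population capable of undermining strategic success.
```

Without an accompanying contextual narrative, a decisionmaker reading only the 0.96 theater-level figure has no way to recover the existence of the hostile village, which is precisely the loss the figure illustrates.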
Until late 2011, campaign assessments in Afghanistan were hindered by the problems inherent in centralized, aggregated assessment methods. Efforts to measure, rather than assess, did not account for the complexity and chaos of the COIN environment. The measurement-focused approach placed onerous data collection demands on subordinate units while providing those same units with little useful information in return. The resulting campaign assessment reports left policymakers and the public dissatisfied with both the assessment process itself and the quality of the information it provided.
An approach that holds promise to better retain and transmit critical context in campaign assessments is contextual assessment. This technique does away with sole reliance on aggregated quantitative metrics and instead reflects all available data (quantitative and qualitative) and commanders' input through layered contextual narratives from the battalion to the theater level. Preserving the context at each level of input allows details of strategic interest to be identified more clearly by higher echelons. Contextual assessment also explicitly requires local commanders to explain why data on particular metrics were not collected or what other data were used to shed light on an assessment topic. The final campaign assessment report derived from this layered process has a better chance of accurately reflecting the depth and complexity of subordinate assessments. To meet transparency and credibility objectives, the final report would incorporate the component reports themselves, each accompanied by a short narrative explaining the analysis behind it.
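The layered structure described above can be sketched as a simple data model. The field names and example content below are hypothetical, not ISAF's actual reporting format; the sketch only shows the key design choice: each echelon's report embeds its subordinates' reports whole, with narratives and explanations for missing data intact, rather than collapsing them into a single rolled-up score.

```python
# A minimal, hypothetical data-structure sketch of layered contextual
# assessment: parent reports preserve subordinate reports rather than
# averaging them away.
from dataclasses import dataclass, field

@dataclass
class AssessmentReport:
    echelon: str                                       # e.g. "battalion", "theater"
    narrative: str                                     # commander's contextual narrative
    metrics: dict = field(default_factory=dict)        # quantitative data
    qualitative: list = field(default_factory=list)    # e.g. patrol debriefs
    missing_data: dict = field(default_factory=dict)   # metric -> why it was not collected
    subordinates: list = field(default_factory=list)   # lower-echelon reports, kept whole

def roll_up(echelon, narrative, reports):
    """Build a higher-echelon report that embeds, rather than summarizes
    away, its subordinate assessments."""
    return AssessmentReport(echelon=echelon, narrative=narrative,
                            subordinates=reports)

# Notional battalion report, including an explicit note on missing data.
battalion = AssessmentReport(
    echelon="battalion",
    narrative="One village remains insurgent-leaning despite improving polls.",
    metrics={"incidents_per_week": 3},
    missing_data={"market_prices": "bazaar closed during collection window"},
)
theater = roll_up("theater", "Progress uneven; see subordinate detail.", [battalion])

# The flagged village and the explanation for the missing metric both
# survive, unaltered, at the theater level.
```

The point of the design is that a reader at the theater level can always drill down to the battalion narrative that generated a judgment, which is what gives the final report its transparency and credibility.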
In addition to improving the assessment process, the U.S. Department of Defense should also involve its interagency partners. Incorporating assessment into professional military education and training and adopting the all-source analysis methods used by the intelligence community would further strengthen both the assessment process and its products.