The U.S. decision to launch military action against Iraq was heavily influenced by a belief that Iraq possessed weapons of mass destruction. It was thought that, if not destroyed, these WMDs could threaten the security of the United States. Thus far, the weapons have not been found, although they may be in the future. On the other hand, they may not be.
In light of this possibility, the media, the U.S. Congress, and the intelligence community on both sides of the Atlantic have begun to focus on whether the absence of WMD in Iraq would imply that the intelligence on which the prior belief was based was either flawed or deliberately slanted. Many people—especially, but not only, those who had originally opposed the war in Iraq—would answer this question affirmatively.
They would be wrong: an unexpected outcome from an inescapably probabilistic estimate does not signify that the prior estimate was flawed or slanted. Intelligence estimates in general are inherently uncertain, which is to say that they are probabilistic. Estimates about something to be found or experienced in the future can at most support a conclusion that there is a conjectured probability that a specified outcome will be realized.
U.S. Secretary of State Colin Powell's strong presentation to the U.N. Security Council on February 5 cited cell-phone intercepts, satellite imagery, and other information sources to support the belief that Iraq possessed WMD. Yet, no matter how compelling the evidence, the inference from it was probabilistic. Yesterday's evidence, no matter how abundant and compelling, can only yield an estimate that there's a high probability—never a certainty—of what will be found tomorrow. Tangible evidence, let alone circumstantial clues, can only warrant an inference that the probability of one particular outcome—in this case, Iraq's possession of WMD—is higher than that of another.
If, despite these relative probabilities, WMD are not found, this outcome does not imply that the prior estimate was wrong. The prior estimate may have been quite accurate even if the unexpected outcome occurs. After all, this unexpected outcome may be attributable not only to the absence of WMD, but also to the possibility that weapons possessed by Iraq prior to the start of the war were subsequently destroyed, moved to another country or, in the case of chemical and biological weapons, decomposed into relatively inconspicuous and innocuous precursor elements or agents.
This line of reasoning raises two central questions that have been largely ignored in the debate about the elusive or nonexistent WMDs in Iraq. The first question is how to make intelligence estimates and estimators accountable.
Unexpected outcomes may ensue because of faulty estimates, or because of other factors such as those above. This gives rise to the question: how can intelligence users, let alone the general public, know whether an unexpected outcome resulted from the inherent range of uncertainty or from the incompetence of the estimators?
The laws of probability suggest an answer. If an unexpected outcome ensues once or twice, it may not be surprising or conclusive: for example, if there were something like a five-to-one probability that Iraq had WMD, but in fact none were found, this would hardly provide grounds for faulting the estimate. However, if unexpected outcomes recur across several unrelated estimates—for example, estimates of the probability of North Korea's development of nuclear weapons and delivery capabilities—then the likelihood that the estimators and the estimation process are broken and need repair rises sharply, because the probability that so many independent estimates would all miss by chance alone shrinks geometrically.
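The arithmetic behind this argument can be sketched briefly. Taking the article's illustrative five-to-one odds, each individual estimate has roughly a one-in-six chance of being "wrong" purely by chance; the chance that several independent estimates all produce unexpected outcomes shrinks geometrically (the number of estimates and the shared odds are assumptions for illustration only):

```python
# Illustrative sketch, not from the article: assume each of several
# independent intelligence estimates assigns five-to-one odds to its
# predicted outcome, so each has a 1/6 chance of missing by chance.
p_miss = 1 / 6

# Probability that n independent estimates ALL miss purely by chance.
for n in range(1, 5):
    p_all_miss = p_miss ** n
    print(f"{n} unexpected outcome(s): probability by chance = {p_all_miss:.4f}")
```

A single miss (about a 17% chance) says little; three or four misses by chance alone are well under 1%, which is when suspicion shifts to the estimators themselves.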
The second question is whether the war in Iraq should have been delayed until even more conclusive evidence of Iraq's possession of WMD had been acquired or, to the contrary, until some compelling evidence of Iraq's non-possession of WMD had been brought to light.
The answer requires recognition of two different types of error that decision makers confront, explicitly or implicitly. One type of error results if the decision maker supposes that a particular outcome will materialize—say, that Iraq has (or did have) WMD—but, despite the high probability associated with this outcome, the supposition turns out to be wrong. The second type of error is the reverse: the decision maker supposes that a different outcome will materialize—for example, that Iraq doesn't (or didn't) have WMD—but this supposition turns out to be wrong, namely, Iraq really does (or did) have WMD.
The decision-maker's dilemma is to choose which of the two possible errors is less hazardous to accept, or more important to avoid.
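This dilemma can be cast as a simple expected-cost comparison. Every number below is invented solely for illustration (the article assigns no probabilities or costs); the sketch only shows the structure of the choice, not the actual calculus behind the decision:

```python
# Hypothetical expected-cost sketch of the decision-maker's dilemma.
# Error 1: act on the belief that WMD exist, but they don't
#          (cost of an unnecessary war).
# Error 2: refrain, believing WMD don't exist, but they do
#          (cost of an unanswered threat).
# All probabilities and costs here are assumed for illustration.

p_wmd = 5 / 6          # illustrative estimated probability Iraq has WMD
cost_error_1 = 1.0     # relative cost of acting when WMD are absent
cost_error_2 = 10.0    # relative cost of not acting when WMD are present

expected_cost_act = (1 - p_wmd) * cost_error_1   # risk borne by acting
expected_cost_wait = p_wmd * cost_error_2        # risk borne by waiting

print(f"expected cost of acting:     {expected_cost_act:.2f}")
print(f"expected cost of not acting: {expected_cost_wait:.2f}")
```

With these invented numbers, waiting carries the larger expected cost; a decision maker who judged the second error far graver would reach the same ranking, which is the structure of the judgment the article describes.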
The Bush administration clearly decided that the second type of error was of so much graver concern for the security interests of the United States that the risk of making this error had to be avoided. Whether one agrees with this decision (which I do), or disagrees with it, there's no question that in the final analysis it is precisely the sort of judgment that the American public pays the president to make.
Mr. Wolf is senior economic adviser and corporate fellow in international economics at RAND, and a senior research fellow at the Hoover Institution.
This commentary originally appeared in The Wall Street Journal on July 18, 2003. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.