Tangled Web

Cyberwar Fears Pose Dangers of Unnecessary Escalation

By Martin C. Libicki

Martin Libicki is a management scientist at the RAND Corporation.

In their zeal to protect themselves in cyberspace, countries need to ensure that they do not trigger even greater threats beyond cyberspace, particularly military or economic forms of retaliation. At a time when the reported level of cyber incidents continues to rise and when cyber risks are perceived as growing even faster, the odds are increasing that a country will find itself in a cyber crisis. Such a crisis could take many different forms: the escalation of tensions associated with an actual, major cyberattack; the suspicion that such an attack has already occurred and must be countered; or the simple fear that an attack might soon occur and must be preempted.

Cyber crises are less likely to emanate from the unavoidable features of cyberspace than from each side's fear, often exaggerated, of what might result from its failure to respond. To avoid the unnecessary escalation of such crises, national cyberdefense agencies should monitor the messages and signals they send out about their own cyberoperations, sharpen their analyses of how potential adversaries would likely perceive the escalatory aspect of offensive strategies, and take additional cautionary measures to manage perceptions.

Thick Fog of Cyberspace

The normal human intuition about how things work in the physical world does not always translate well into cyberspace. The effects, and sometimes even the fact, of cyberoperations can be obscure. The source of the attacks may not be obvious; the attacker must claim them, or the defender must attribute them. Even if the facts are clear, their interpretations may not be; even when both are clear, policymakers may not necessarily understand them.

The subjective factors of cyberwar pave paths to inadvertent conflict. Uncertainties about allowable behavior, misunderstandings of defensive preparations as offensive ones, errors in attribution, unwarranted confidence that cyberattacks are low-risk because they are hard to attribute, and misinterpreting the norms of neutrality — these are all potential sources of instability and crisis. Here are three examples of the kind of perils that lurk:

Computer network exploitation — espionage, in short — can foster misperceptions and possibly conflict. Everyone spies on everyone, even allies. But one side tires of having its networks penetrated. Perhaps the frequency and volume of exploitation crosses some unclear red line; or the hackers simply make a mistake tampering with systems to see how they work and unintentionally damage something.

One side's defensive preparations can give the other side the notion that its adversary is preparing for war. Likewise, preparing offensive capabilities for possible eventual use could be perceived as an imminent attack. Because much of what goes on in cyberspace is invisible, what one state perceives as normal operating procedure, another could perceive as just about anything.

Difficulties of attribution can muddle an already confused situation. Knowing who actually does something in cyberspace can be quite difficult. The fact that numerous attacks can be traced to the servers of a specific country does not mean that the country launched an attack or even that it originated in that country. Even if it did originate there, this fact does not mean that the state is complicit. The attack could have been launched by a cybercriminal cartel that took over local servers, or some third party could have wanted it to look as though the state launched an attack.

A great deal depends on whether other states are perceived as basically aggressive (and must be stopped) or defensive (and can be accommodated). During the Cuban missile crisis, many of President John F. Kennedy's advisers thought they saw another Munich 1938: A failure to respond forcefully would embolden the Soviet Union, discourage allies, and sow the seeds for a later confrontation when the United States would be in a worse position. President Kennedy, however, saw the potential for another Sarajevo 1914; he carried Barbara Tuchman's The Guns of August around with him, urging his advisers to read it. He was deeply concerned about stumbling inadvertently into a nuclear war because one side's moves caused the other side to react in a hostile manner, forcing the first side to react accordingly, and so on.

In the past 20 years, there have been plenty of instances of cybercrime and cyberespionage. But there have been only three and a half cyberattacks that could conceivably rise to the level of a cyberwar: the 2007 attacks against Estonia, similar attacks on Georgia in 2008, the Stuxnet worm directed at Iranian nuclear facilities in 2009–2010, and what may have been a cyberattack on Syrian radar prior to an Israeli air strike on a supposed nuclear reactor in 2007. Of these, all but one (Stuxnet) were unaccompanied by physical violence. In part for this reason, none of these engendered a genuine cyber crisis.

In the future, however, a crisis can start over nothing at all. Because preparations for cyberattack are often generally invisible (if they are to work), there is little good evidence that can be offered to prove that one state is not starting to attack another. A state may try to assuage fears touched off by otherwise unmemorable incidents by demanding proof that the other side is not starting something. If proof is not forthcoming (and what would constitute proof, anyway?), matters could escalate.

In such cases, states should take the time to consider escalation carefully. There is little to be gained from an instant response. Cyberattacks cannot disarm another side's ability to respond in kind. Each state should also anticipate the other side's reaction to escalation, particularly what others may infer about one's intentions. In cyberspace, as political scientist Barry Posen has written, "the defender frequently does not understand how threatening his behavior, though defensively motivated, may seem to the other side."

Consider what happens when a state's preparation level unexpectedly rises. Perhaps the security folks have just won their bureaucratic argument against the laissez-faire folks. System administrators could be reacting to a news item, such as discovery of the Stuxnet worm. Maybe some laboratory demonstration revealed how vulnerable the state's key systems were — or, conversely, how easy it would be to secure them if new technologies were employed. But potential adversaries may have no insight into which motivations were present. They might assume that whatever the suddenly better-defended state does is all about them. Thus, they reason, such preparations can only be prefatory to attack. This might lead a potential adversary to attack first or take other actions that are interpreted by the presumed soon-to-be attacker in the worst way. Crisis follows.

As in the physical world, what one state regards as standard operating procedure may be interpreted as anything but by another state. But in cyberspace, two factors exacerbate the problem. First, because states constantly penetrate one another's computer networks, they can observe many things about each other, but only in partial ways, which could lead to false conclusions and miscalculations. Second, cyberwar is too new and untested for a universal set of standard operating procedures — much less a well-grounded understanding of another state's standard operating procedures — to have evolved.

For all these reasons, cyberwar may not be seen as it actually is, and states may react out of fear rather than reflection. An action that one side perceives as innocuous may be seen as nefarious by the other. Fortunately, mistakes in cyberspace do not have the potential for catastrophe that mistakes in the nuclear arena do. Unfortunately, that fact may prevent leaders from exercising their normal caution in crisis circumstances. Paradoxically, although the systemic features of cyber crises lend themselves to resolution (there is little pressure to respond quickly, and there are grounds for giving the other side some benefit of the doubt), the fretful perceptions of cyberoperations as they opaquely unfold may drive participants toward conflict.

Cautionary Guidelines

To manage crises and forestall their escalation in cyberspace, the following seven points may be usefully kept in mind.

The first is to understand that the answer to the question — Is this cyberattack an act of war? — is a decision, not a conclusion. Even if cyberwar can be used to disrupt life on a mass scale, it cannot be used to occupy another nation's capital. It cannot force regime change. No one has yet died from it. A cyberattack, in and of itself, does not demand an immediate response to safeguard national security. The victim of a cyberattack could declare that it was an act of war and then go forth and fight — or the victim could look at policies that reduce the pain without so much risk, such as by fixing or forgoing software or network connections whose vulnerabilities permitted the cyberattack in the first place.

Second is to take the time to think things through. Unlike with nuclear war, a nation's cyberwar capabilities cannot be disarmed by a first strike. There is not the same need to get the jump on the other guy — or to match his offense with your offense when it is your defense that dictates how much damage you are likely to receive.

Third is to understand what is at stake — which is to say, what you hope to gain. With cyberattack, what you are trying to prevent is not the initial attack but the next attack, the effects of which might be larger than the initial attack but might also be smaller. (The latter is particularly true if the initial attack teaches the victims that, say, making industrial controls accessible to the Internet may not have been the smartest idea.)

Fourth is not to take possession of the crisis unnecessarily. That is, do not back yourself into a corner where you always have to respond, whether doing so is wise or not. It is common, these days, to emphasize the cost and consequences of a cyberattack as a national calamity. Having created a demand among the public to do something, the government is then committed to doing something even when doing little or nothing is called for. Emphasizing the pain from a cyberattack also fuels the temptation of others to induce such pain. Conversely, fostering the impression that a great country can bear the pain of cyberattacks, keep calm, and carry on reduces the temptation.

Fifth is to craft a narrative that can take the crisis where you want it to go. Narratives are morality plays in which events take their designated place in the logical and moral scheme of things: "We are good, you are bad"; "we are strong and competent, unless we have stumbled temporarily because of your evil." Narratives also have to find a role for the attacker, and the development of such a role may, in some cases, encourage the attacker's graceful and face-saving retreat from belligerence. After all, the odds that an attack in cyberspace arises from miscalculation, inadvertence, unintended consequences, or rogue actors are nontrivial. Perhaps more than any other form of combat, cyberwar is storytelling — appropriately for a form of conflict that means to alter information.

Sixth is to figure out what norms of conduct in cyberspace, if any, work best. In March 2013, the United States and China agreed to carry out high-level talks on cyber norms. Particularly useful norms are those that can be monitored before any war starts. These include norms that pledge nations to cooperate in investigating cybercrimes, that sever bonds between a state and its hackers or commercially oriented cybercriminals, and that frown deeply on espionage on networks that support critical public services (such as electrical power). Working toward useful norms may well help reduce the likelihood of a crisis, but it would be unrealistic to believe that they can eliminate the possibility.

Seventh is to recognize what a crude tool counter-escalation may be for influencing the other side. In cyberspace, what the attacker does, what he thinks he did, and what the defender thinks he did may all be different. Then there's the similar difference between the defender's response and the attacker's perception of what was done in return. The attacker may think the retaliation was proportional, was understated, or went overboard in crossing red lines — red lines presumably not crossed by himself. The effect is akin to playing tennis on a rock-strewn court.

In sum, while it is worthwhile to prevent what some have characterized as a "future 9/11 in cyberspace," similar levels of care and thought need to be given to how to manage a potential 9/12 in cyberspace. If not, countries may find, as with the historical 9/11, that the consequences of the reaction and counter-reaction are more serious than the consequences of the original action.