Developing an Objective, Repeatable Scoring System for a Vulnerability Equities Process


Hawaii Air National Guardsmen evaluate network vulnerabilities during the Po’oihe 2015 Cyber Security Exercise at the University of Hawaii Manoa Campus, Honolulu, HI, June 4, 2015, photo by Airman 1st Class Robert Cabuco/Hawaii Air National Guard


by Sasha Romanosky

February 5, 2019

The public release of the Vulnerability Equities Process (VEP) charter (PDF) by the White House in late 2017 went a long way toward satisfying the public's curiosity about the secretive, high-profile and contentious process by which the U.S. government decides whether to temporarily withhold or publicly disclose zero-day software vulnerabilities—that is, vulnerabilities for which no patches exist. Just recently, the U.K. government publicly released information about its Equities Process as well.

The U.S. and U.K. charters are similar in the overall structure of the process and in the criteria they use to determine whether to disclose or retain a vulnerability. In effect, both weigh offensive equities against defensive equities. Offensive equities are the benefits to the intelligence community when a vulnerability is temporarily withheld and used for intelligence collection, while defensive equities are the benefits individuals and businesses gain from knowing about vulnerabilities and being able to protect their own computers.

The U.S. charter further states that to “the extent possible and practical, determinations to disclose or restrict will be based on repeatable techniques or methodologies that enable benefits and risks to be objectively evaluated by VEP participants.” This raises a question: If the U.S., the U.K. or any other government sought to create an objective framework for decision making, what might that look like? In particular, what questions should be included, how should they influence the outcome, and how should the results be interpreted?…

The remainder of this commentary is available at lawfareblog.com.


Sasha Romanosky is a policy researcher at the nonprofit, nonpartisan RAND Corporation, where he researches the economics of security and privacy, national security, applied microeconomics, and law and economics.

This commentary originally appeared on Lawfare on February 4, 2019. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.