Four Fallacies of AI Cybersecurity

Aug 23, 2024

A "data shield" depicted to illustrate cybersecurity, image by BlackJack3D/Getty Images

Photo by BlackJack3D/Getty Images

As with many emerging technologies, the cybersecurity of AI systems has largely been treated as an afterthought. The lack of attention to this topic, coupled with increased realization of both the potential and perils of AI, has opened the door for the development of various AI cybersecurity models—many of which have emerged from outside the cybersecurity community. Absent active engagement, the AI community now stands to relearn many of the lessons that software and security engineering have accumulated over many years.

To date, the majority of AI cybersecurity efforts do not reflect the accumulated knowledge and modern approaches within cybersecurity, instead tending toward concepts that have been demonstrated time and again not to support desired cybersecurity outcomes. I'll use the term “fallacies” to describe four such categories of thought:

Cybersecurity is linear. The history of cybersecurity is littered with attempts to define standards of action. From the Orange Book (PDF) to the Common Criteria, pre-2010s security literature was dominated by attempts to define cybersecurity as an ever-increasing set of steps intended to counter an ever-increasing cyber threat. It never really worked. Setting compliance as a goal breeds complacency and undermines responsibility.

Starting in the 2010s with the NIST Risk Management Framework (RMF), the cybersecurity community came to the realization that linear levels of increasing security were damaging to the goals of cybersecurity. Accepting that cybersecurity is not absolute and must be placed in context shifted the dialogue away from level-based accreditation and toward threat-based reasoning—in the same way that entities handle many other types of risk.

In addition to bypassing some stickier issues that come with level-based evaluations (What if I do everything for a given level except one element? What if an element of this level doesn't exist in my organization? What if I do them all, but poorly?), the risk-based worldview recognizes that the presence of a thinking adversary and the ever-evolving technological landscape result in a shifting environment that defies labels and levels. As new tactics and technologies emerge, so too do new dynamics as both defense and offense optimize around these lines in the sand.

When a new equilibrium emerges, it is rarely to the benefit of defense: criteria become checklists rather than aspirations, and defense often loses out to an offense that treats those criteria as a playbook for bypassing security. This is not to say that risk-based approaches don't have their own issues, but when executed as intended they provide a context-based responsiveness and practice of reevaluation that the earlier, level-focused approaches lack.

Threats are ordered. Terms such as script kiddie and Advanced Persistent Threat (APT), along with various associated labels, have become so ubiquitous that they are often used to describe specific cyber threats rather than the simplifications they represent. Encapsulated in this nomenclature is the idea that a script kiddie is strictly lesser than an APT, with the latter commanding more knowledge and resources than the former. While in the broadest sense this is often true, it fundamentally misunderstands the goal of threat modeling—the practice by which threats are defined for the purpose of motivating defender action.

In his seminal book on the topic, security expert Adam Shostack opens with George Box's famous reminder that all models are wrong, but some are useful, emphasizing that the purpose of a threat-modeling exercise is not to exhaustively enumerate possible attacks but to focus defense on the most pressing issues. Coupled with the prior fallacy, the implication is that setting a security level based on an arbitrary model of a threat actor produces a defense designed to stand up to an imagined adversary that doesn't really exist (under the questionable assumption that those efforts provide the security imagined, per the next fallacy).

An often-overlooked but critical reality of cyberspace is that attackers are themselves constrained by resources (PDF). There are no bonus points for coolness or style (excepting perhaps attacks intended to convey a message through the act of attacking), and attackers consistently use N-days (known vulnerabilities) and deploy well-worn tools to get the job done. The differences between script kiddies and APTs lie in the attacker's focus and the opportunities a given system presents.

Reductive models that focus on a caricature of an attacker, rather than a reasoned analysis of vulnerabilities in context, subvert the primary purpose of threat modeling in the first place: to inform risk, direct scarce resources, and respond to the environment. From the defender's perspective, it matters little whether the attacker is truly a nation state or a lone actor who just got lucky. Rather, the attacker's focus and the effective allocation of defender resources are often the determining factors in the types of attack a system is actually likely to face. While it is easy to imagine what different types of adversary can do, the relevance of those capabilities to a given system is what separates informed security from uninformed FUD (fear, uncertainty, and doubt).

Cybersecurity is distinct. One of the greatest advancements in cybersecurity over the past few decades is the broader acceptance that the state of the battlefield in cyberspace can be affected. The emergence of system security engineering as a discipline, and efforts to bring it into the consciousness of those building and protecting systems, such as CISA's Secure by Design Initiative, emphasize that "shifting left," reducing vulnerability, and treating security as a lifecycle system property are essential to defendable and robust systems. This realization has laid bare many realities that had long been ignored.

Despite these hard-won lessons developed over decades of software and security engineering practice, some have chosen to treat cybersecurity as an additional, add-on task—and act as if model weights, algorithms, or development pipelines in AI are somehow fundamentally different from the data and processes cybersecurity has been concerned with throughout its history. Such a view does a disservice to AI practitioners who are left to learn the lessons of security engineering anew. This has played out in other fields, such as the Internet of Things and mobile technologies, which have been slow to incorporate these lessons and, as a result, have struggled with the security of emergent systems.

Which leads to perhaps the most fallacious element of the current dialogue:

AI cybersecurity is new. Naturally, as with any new technology or application, AI brings new elements to the field: model poisoning and prompt engineering attacks present new challenges. However, the fundamentals of data security and input validation are well understood. The foundations of cybersecurity remain unchanged, rooted in principles such as the triad of confidentiality, integrity, and availability. At their heart, AI systems are composed of software and hardware that retain these concerns and are not fundamentally different from what has been the target of security initiatives for years, making the lessons identified above, and the innumerable lessons not covered here, relevant to AI cybersecurity. And yet, time and again, some have chosen to create new frameworks or regimes from whole cloth, ignoring the research and analysis that has gone into the broader ecosystem for the sake of labeling a framework "AI," with the implication that it is somehow better suited to solving what is essentially a security engineering problem.
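
To make that continuity concrete, consider a minimal sketch of input validation applied to a prompt before it reaches a model. The function name, length limit, and checks below are hypothetical illustrations rather than anything prescribed by the frameworks discussed here, and such basic hygiene is not by itself a defense against prompt-based attacks; the point is simply that the same validation discipline long applied to web forms and API payloads applies just as well to AI inputs.

```python
# Illustrative sketch only: names and limits are hypothetical, not drawn from
# any framework cited in this commentary. It shows classic input validation
# (length limits, character checks, normalization) applied to a prompt.
import re

MAX_PROMPT_CHARS = 4000  # assumed limit set by the deploying organization
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")  # disallowed control characters


def validate_prompt(raw: str) -> str:
    """Reject or normalize untrusted prompt input before passing it to a model."""
    if not raw or not raw.strip():
        raise ValueError("empty prompt")
    if len(raw) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds configured length limit")
    if CONTROL_CHARS.search(raw):
        raise ValueError("prompt contains disallowed control characters")
    # Whitespace normalization mirrors long-standing sanitization practice.
    return " ".join(raw.split())
```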

For its part, the National Institute of Standards and Technology (NIST) has engaged in developing an AI Risk Management Framework (RMF), designed to address many of the fallacies described here by extending the modern approach to cybersecurity to AI developments; and the Open Worldwide Application Security Project (OWASP) has adapted its highly successful efforts for identifying common cyberattacks to those that are likely to occur in AI. The need to bake in cybersecurity early has been widely recognized (PDF), even if the practices to do so are new to some.

Just as the Department of Defense's adaptation of the NIST RMF codified this thinking for military systems, so too will it take effort to adapt these principles to the space of AI development, from frontier labs to the battlefield. While AI and its associated technologies naturally create new attack vectors and may increase the considerations that must be made in determining risk, the challenges of AI cybersecurity require approaches that capture the breadth and depth of cybersecurity understanding to date rather than engaging in the fallacies described here. It will be important to engage with the problem of AI cybersecurity by:

  • Embracing and refining risk-based methods, such as the NIST initiatives, in ways that support the differences in context and security needs of AI systems and recognize their unique challenges. This must be informed by the depth of knowledge coming from decades of cybersecurity successes—and failures. Much more work is needed here, just as the RMF and Cybersecurity Framework require the adaptation of these concepts to new domains and applications through the use of profiles and mappings.
  • Developing meaningful, well-defined, and well-supported threat models for AI that capture the nuance and challenges presented by AI systems as they exist both today and in the future (see, for example, Berryville Institute of Machine Learning) without falling back on inaccurate and under-defined concepts that have led to irrational decisionmaking in the past. Qualitative and quantitative measurements are essential to answering the question “Am I secure?” as the necessary predicate is “Against what?”
  • Integrating cybersecurity into the AI workflow, rather than as an add-on to be included where possible. Understanding how the economics of information security, the roles and limitations of different technologies and practices, and the impact of security processes on system goals manifest in AI will be essential. Creating the necessary basis for decisionmaking will require setting aside guesswork and pontification for reasoned analysis and understanding, informed by the state of cybersecurity rather than in ignorance of it.

Unfortunately, as in most military-relevant advanced technology areas, the stakes are high and time is not an ally. Decisions made now in the development of frontier systems will be increasingly hard to dislodge as they become entrenched, and ill-informed policy can lead to high costs, both monetary and in terms of lost opportunity. Most critically, poorly conceived policy can impair the very security it claims to support, both by locking in unsustainable postures and by poisoning the well for better guidance grounded in real research. There's a need for responsibility, reason, and high standards that is currently absent in much of AI cybersecurity research.

More About This Commentary

Chad Heitzenrater is a senior information scientist at the nonprofit, nonpartisan RAND Corporation.

Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.