Emerging Technology and Risk Analysis

Artificial Intelligence and Critical Infrastructure

Published Apr 2, 2024

by Daniel M. Gerstein, Erin N. Leidy

Research Questions

  1. What is the technology availability for AI applications in critical infrastructure in the next ten years?
  2. How will science and technology maturity; use case, demand, and market forces; resources; policy, legal, ethical, and regulatory impediments; and technology accessibility of critical infrastructure applications change during this ten-year period?
  3. What risks and scenarios (consisting of threats, vulnerabilities, and consequences) is AI likely to present for critical infrastructure applications in the next ten years?

This report is one in a series of analyses on the effects of emerging technologies on U.S. Department of Homeland Security (DHS) missions and capabilities. As part of this research, the authors were charged with developing a technology and risk assessment methodology for evaluating emerging technologies and understanding their implications within a homeland security context. The methodology and analyses provide a basis for DHS to better understand the emerging technologies and the risks they present.

This report focuses on artificial intelligence (AI), especially as it relates to critical infrastructure. Drawing on the literature about smart cities, the authors assess the technology against four attributes: technology availability, plus risks and scenarios, which the authors divided into threat, vulnerability, and consequence. The risks and scenarios considered in this analysis pertain to AI use affecting critical infrastructure. The use cases could involve either monitoring and controlling critical infrastructure or adversaries employing AI in illicit activities and nefarious acts directed at critical infrastructure. The risks and scenarios were provided by the DHS Science and Technology Directorate and the DHS Office of Policy. The authors compared these four attributes across three periods—short term (up to three years), medium term (three to five years), and long term (five to ten years)—to assess the availability of and risks associated with AI-enabled critical infrastructure.
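The comparison described above—scoring each scenario on threat, vulnerability, and consequence across three time horizons—can be sketched in code. This is a minimal illustrative sketch only: the ordinal 1–5 scores, the multiplicative combination, and the example scenario are assumptions for illustration, not values or methods taken from the report.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One critical-infrastructure AI scenario, scored 1 (low) to 5 (high)."""
    name: str
    threat: int          # likelihood/capability of an adversary or hazard
    vulnerability: int   # susceptibility of the infrastructure to that threat
    consequence: int     # severity of impact if the threat is realized

    def score(self) -> int:
        # Assumed combination rule: risk as the product of the three
        # components (a common decomposition, not the report's own formula).
        return self.threat * self.vulnerability * self.consequence

# Compare a hypothetical scenario across the report's three time horizons.
periods = {
    "short term (0-3 yrs)":  RiskScenario("AI-enabled monitoring", 2, 3, 3),
    "medium term (3-5 yrs)": RiskScenario("AI-enabled monitoring", 3, 3, 4),
    "long term (5-10 yrs)":  RiskScenario("AI-enabled monitoring", 4, 2, 5),
}

for period, scenario in periods.items():
    print(f"{period}: {scenario.name} risk score = {scenario.score()}")
```

Structuring the assessment this way makes the period-over-period comparison explicit: the same scenario is rescored at each horizon, so changes in any one component (e.g., falling vulnerability as defenses mature) are visible separately from the overall score.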

Key Findings

  • AI is a transformative technology and will likely be incorporated broadly across society—including in critical infrastructure.
  • AI will likely be affected by many of the same factors as other information age technologies, such as cybersecurity, protecting intellectual property, ensuring key data protections, and protecting proprietary methods and processes.
  • The AI field contains numerous technologies that will be incorporated into AI systems as they become available. As a result, AI science and technology maturity will be based on key dependencies in several essential technology areas, including high-performance computing, advanced semiconductor development and manufacturing, robotics, machine learning, natural language processing, and the ability to accumulate and protect key data.
  • To place AI in its current state of maturity, it is useful to delineate three AI categories: artificial narrow intelligence (ANI), artificial general intelligence, and artificial super intelligence. By the end of the ten-year period of this analysis, the technology will very likely still only have achieved ANI.
  • AI will present both opportunities and challenges for critical infrastructure and the eventual development of purpose-built smart cities.
  • The ChatGPT-4 rollout in March 2023 provides an interesting case study for how these AI technologies—in this case, large language models—are likely to mature and be integrated into society. The initial rollout illustrated a cycle—development, deployment, identification of shortcomings and other areas of potential use, and rapid updating of AI systems—that will likely be a recurring feature of AI.

This research was sponsored by the U.S. Department of Homeland Security Science and Technology Directorate and conducted by the Management, Technology, and Capabilities Program of RAND Homeland Security Research Division.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.