Through both deep institutional knowledge and high-quality, intersectional research capabilities, RAND is uniquely equipped to help the Department of Homeland Security and the Homeland Security Enterprise meet the threats and assess the opportunities posed by AI and machine learning to keep the American homeland safe.
The Challenge
The explosive growth of AI tools—machine learning systems, generative large language models, and, someday, general artificial intelligence—exposes a new frontier of threats and opportunities for the United States. Emerging technologies will have effects across every Department of Homeland Security mission, from countering terrorism to securing cyberspace to preserving American economic security.
RAND's History
For nearly eight decades, RAND has played a pivotal role in advising government agencies about new technologies: in 1946, it released its first report on the potential design, performance, and use of manmade satellites; in the 1970s, RAND explored the previously unthinkable—unmanned aerial vehicles; and in the 1980s, RAND designed the building blocks for today's internet.
Mission Support
Artificial intelligence (AI) and machine learning (ML) can increase the speed, accuracy, and convenience of many of the Department of Homeland Security's core missions. But leaders are rightly cautious about the potential risks of these emerging technologies: discriminatory results, a lack of transparency and oversight, and infringement of privacy and civil liberties.
Researchers from RAND have been exploring the use of emerging technologies by DHS and other government agencies for the past several years. A suite of ongoing research projects is examining future uses of AI and ML, along with public perceptions of emerging technologies. Other core RAND capabilities in AI/ML include:
Applying machine learning for prediction, classification, and anomaly detection
Tailoring large language models for security applications
Evaluating machine learning systems across the lifecycle
Developing policies related to AI deployment, privacy, and risks
Educating policymakers about AI, machine learning methods, and their implications
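As one illustration of the first capability above, anomaly detection in its simplest form flags data points that deviate sharply from a baseline. The sketch below uses a basic z-score rule on hypothetical sensor readings; it is an illustrative example, not a description of any specific RAND method, and the data, function name, and threshold are invented for the example.

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A simple z-score rule: illustrative only. Real deployments would use
    more robust methods (e.g., median-based or model-based detectors).
    """
    mean = statistics.fmean(values)
    std = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * std]

# Hypothetical readings with one obvious outlier
readings = [10.1, 9.8, 10.3, 9.9, 10.0, 42.0, 10.2, 9.7]
print(flag_anomalies(readings))  # → [42.0]
```

Because a single extreme value inflates the standard deviation it is measured against, a tighter threshold (here 2.0 rather than the conventional 3.0) is needed for a small sample like this—one reason production systems favor more robust detectors.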
On this page, you'll find a sampling of recent work on AI and ML curated specifically for the homeland security community. To view RAND's full archive of work on AI and ML, visit RAND's Artificial Intelligence Topic Page.
The U.S. Department of Homeland Security has deployed emerging technologies that affect the American public, such as face-recognition technology, 5G network technology, and counter–unmanned aircraft systems. Public perception is an essential element that can help identify the risks and benefits of these technologies.
Among a broad set of technologies, artificial intelligence (AI) stands out for both its rate of progress and its scope of applications. How does AI affect national security and U.S. competitiveness? And what actions could national security organizations take in response?
Biological attacks that previously failed for lack of information might succeed in a world where AI tools such as large language models can bridge that gap. And given the rapid evolution of AI and biotechnology, governmental capacity to understand or regulate them is limited.
The RAND National Security Research Division hosted a moderated panel discussion to examine the key role technical talent will play in DoD’s digital transformation, the factors behind the department’s current shortage of technical talent, and the steps DoD can take to address recruiting and retention barriers.
The confluence of machine learning and gene editing has the potential to transform health care, agriculture, national security, and many other fields. How can policymakers minimize the risks and maximize the benefits?
As artificial intelligence algorithms become incorporated into more decision processes that affect individuals' welfare and well-being, public perceptions of the technology will have many implications, including for jury judgments about algorithmic liability and support for AI regulation.
Machine learning has great potential to enable military decisionmaking at the operational level of war, but only when paired with human analysts who have a detailed understanding of the context behind a given problem.
Using generative artificial intelligence, U.S. adversaries can manufacture fake social media accounts that seem real. These accounts can be used to advance narratives that serve those governments' interests and pose a direct challenge to democracies. The U.S. government, technology, and policy communities should act quickly to counter this threat.
Ensuring that technologies deployed by the government serve the public interest requires an accurate assessment of their benefits and risks as well as the public's trust that these rapidly advancing technologies are used responsibly. Public perception is important, including the perspectives of different demographic groups.