The U.S. Department of Homeland Security (DHS) just released its 2024 Artificial Intelligence (AI) Roadmap, but for AI applications to enhance operations as the department envisions, the roadmap needs to incorporate real-world testing and experimentation.
DHS has proposed a collaborative, whole-of-government strategy for developing and adopting a range of AI applications. But despite plans to engage partners across government, the private sector, and academia, the roadmap does not include frontline operators' perspectives on how best to adopt AI-enabled tools and capabilities in executing their operational missions. Barriers rooted in operator perspectives and experiences can prevent a technology from realizing its full potential, and the roadmap will not succeed unless it includes efforts to overcome them.
The rapid pace of AI development and integration is fueling concerns about cultural, ethical, and privacy issues. Perceptions that AI advancements will threaten traditional roles or benefit only certain groups often heighten tensions around the technology, and innovations with the potential to change cultural identities intensify social concerns further. Operators' perspectives on these issues can create barriers, increase reluctance, and hinder AI adoption.
For example, operators may be wary of AI-enabled surveillance or screening systems because of unintended biases and the potential misattribution or misidentification of individuals or cargo. This hesitation may be exacerbated by a lack of understanding about what data are used to train AI capabilities, or by uncertainty about how AI-enabled tools will perform in critical or time-sensitive scenarios. A lack of AI transparency compounds these barriers: when systems are opaque, operators cannot see how they arrive at decisions, and may question how different factors are weighed and whether the output is trustworthy.
Additional factors contribute to adoption barriers, including cultural and organizational issues, cost constraints, and regulatory uncertainty. The perceived threat that AI systems pose to personal autonomy can also make new applications intimidating to their intended operators.
A deeper understanding of these operator barriers is needed to ensure that new AI applications enhance the department's missions as intended. The roadmap describes two avenues the department could leverage to build this understanding: its AI pilot programs and its planned federated testbed.
The pilot programs allow for the test and evaluation of new AI applications with operators across the enterprise and in the real world. DHS has conducted these pilots since 2015, and upcoming trials are planned to improve the training of U.S. Citizenship and Immigration Services officers, assist Federal Emergency Management Agency stakeholders in developing mitigation plans, and enhance Homeland Security Investigations officers' investigative processes.
DHS is partnering with experts during the pilot programs to inform department-wide policies on AI governance, but the roadmap does not describe targeted efforts to understand the barriers that may keep operators from adopting the technology and integrating it into their workflows.
The pilot programs are a golden opportunity to engage directly with frontline operators. Data could be collected to examine which barriers are present and how they change over the course of a pilot program. Collection methods might include surveys of operators, focus groups with operators piloting a specific AI application, observational studies in the operational environment, and in-depth interviews with individual operators.
In addition to using pilot programs to study operator barriers, DHS can learn from operators in an experimental setting. Notably, the roadmap also describes the Science & Technology Directorate's plans to create a federated AI testbed that will provide independent assessment services for operators across DHS components and the homeland security enterprise.
Testbeds allow for exploration and end-to-end testing of new technologies in a safe, non-consequential environment that approximates real-world conditions. They also allow researchers to design studies that target specific research questions and elicit direct feedback from particular users or groups of interest. The resulting insights can help developers better align technology advancements with operator needs and capabilities, enhancing technology adoption.
With these benefits in mind, DHS has an opportunity to design a testbed that centers frontline operators within its test and evaluation process. Doing so would increase the likelihood that operators adopt new AI applications. Existing federated testbeds have demonstrated great value in engaging operators early and often in the technology development process, and DHS could look to them to learn how to integrate operators successfully.
For example, the National Oceanic and Atmospheric Administration (NOAA) Testbed Program is a demonstrated leader in user-centered design approaches to technology test and evaluation. NOAA's testbeds provide collaborative settings where researchers and operators come together in one space, produce new knowledge, and examine cutting-edge technology. This approach has been key to the agency's successful implementation of innovations that enhance its mission of protecting life and property.
Together, DHS's pilot programs and planned testbed can deepen understanding of operator barriers to AI adoption. With this understanding, researchers can better develop mitigation strategies for overcoming those barriers, measure their impacts, and maximize the benefits of AI. Mitigation strategies might include developing and deploying AI-specific education and training for operators; building transparency and accountability through measures such as explainable AI, ethical guidelines, and regulatory oversight; demonstrating how AI augments operator processes rather than replacing the operator; and using communication and outreach to address operator concerns and misconceptions about AI.
A multifaceted approach that maximizes the possibilities of pilot programs and testbed studies could be game-changing for DHS operators and smooth the way for the enhanced AI operations the department envisions. Now is the time to understand and act on operators' perspectives and experiences to ensure that new AI applications reach their full potential within the department.
Katie A. Wilson and Jody Chin Sing Wong are associate policy researchers, and Eric Landree is a senior engineer at RAND, a nonprofit, nonpartisan research institution.