Cooperation Could Be Critical to U.S. AI Success

commentary

Feb 20, 2024

Photo by MF3d/Getty Images

As the United States looks to reinvigorate its technological edge over its adversaries, one important initiative has been to seek out collaborators from its extensive network of allies and partners to co-develop advanced technologies such as artificial intelligence (AI). Cooperation could be especially important to the development of AI, since bringing in collaborators expands the pool of data available to train these algorithms and deepens the talent pool available to work on them. However, the prospect of co-developing AI applications with other nations faces many challenges as rapid advancements in the technology outpace policies and procedures designed during the industrial age.

To better understand these issues, RAND interviewed 21 experts from six countries to discover which obstacles they believe present the greatest challenges to co-developing AI with the United States. Our interviewees included both serving government officials and policy experts on emerging technologies and defense cooperation; all were granted anonymity so they could speak candidly about these issues. Throughout our discussions, three themes emerged.

Conceptual Problems—“What Does Co-Development of AI Actually Mean?”


Overall, conceptual problems stood out as the barrier most frequently cited by our interviewees. While senior leaders widely agree that AI has the potential to transform military operations, few have identified exactly which concrete use cases currently justify a substantial investment in the technology. Because most potential American collaborators operate with substantially smaller defense budgets than the United States, their senior leaders need a compelling reason to redirect scarce resources into AI development. Additionally, many interviewees noted the lack of clarity about exactly what co-development of AI would mean in this context. Traditionally, partners in a collaborative defense project can simply be assigned distinct components of a physical object to produce independently; for example, one partner might design and manufacture the tail for an aircraft while other partners focus on the wings or landing gear. AI algorithms, however, do not separate cleanly in this manner, resulting in confusion about what "co-development" would mean in practice for an AI project.

Data Problems—Accessibility, Sovereignty, and Interoperability

Additionally, our interviewees expressed concerns that data issues could hinder AI collaborations. Many larger nations promote the concept of "data sovereignty," under which data remains under their control by physically residing on devices located within their sovereign territory. Interviewees often noted this norm—promoted in part by regulatory mechanisms such as Europe's General Data Protection Regulation—even when discussing data sets that would not contain any regulated data elements such as personally identifiable information or health information. While smaller nations often demonstrated more flexibility on these issues, several interviewees expected that for larger and wealthier nations, maintaining data sovereignty would remain a point of national pride and thus an obstacle to collaboration.

Most interviewees also believed that data interoperability—the ability to use data from different data sets together—would prove to be a substantial barrier to collaborative AI projects. Experienced AI practitioners have often noted the difficulty of training AI models on data sets that were never intended to work together. These problems are likely to be particularly acute in the military context, given its extensive collection of legacy software.

Finally, several interviewees expressed concerns that legal barriers would inhibit collaboration on AI. Some described ways the International Traffic in Arms Regulations (ITAR) had inhibited collaboration with the United States on past co-development projects. In some cases, nations can lose the ability to freely share information originating in their own country with their own defense contractors after sharing it with the United States. Interviewees also noted the potential for intellectual property disputes to hinder the co-development of AI, given that any AI program would most likely rely on private corporations to develop the underlying algorithms.

Help Me Help You

If the Biden administration wants to improve America's ability to collaborate with allies and partners in this space, it should consider three steps. First, it could identify a concrete use case for the co-development of AI. Two of the most promising options would be using AI to enhance cyber defenses or computer vision for the processing of intelligence materials. Regardless of exactly which use case is chosen, helping America's allies move beyond vague and aspirational plans for AI could be a decisive step in the practical development of this technology.


Second, the Defense Department could eliminate ITAR as a barrier to collaboration by ensuring governments—rather than contractors—maintain ownership of the data used to train AI algorithms. Government agencies have a broad exception to ITAR that allows them to share information with other nations. However, defense contractors do not share this exception—they are required to apply to the State Department for an ITAR exception in order to share information they own regardless of the preferences of the Defense Department. While government ownership of data would not address every issue with export controls, it should prevent allies and partners from feeling punished for sharing data with the United States.

Finally, DoD should consider establishing a secure unclassified cloud computing environment jointly funded and operated with allies and partners. Shared infrastructure would allow the DoD and its collaborators to pool computing power and storage resources while preventing duplication of effort between the United States and its allies. A jointly operated environment could also spur innovation and deepen working relationships between the United States and key allies and partners. Working side by side with key allies and partners around the globe to advance critical technologies like AI could ultimately help the United States maintain its technical edge over global adversaries and ensure the continued preeminence of American military power.


James Ryseff is a senior technical policy analyst at the RAND Corporation.

More About This Commentary

Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.