The Future of International Scientific Assessments of AI's Risks

Hadrien Pouget, Claire Dennis, Jon Bateman, Robert F. Trager, Renan Araujo, Haydn Belfield, Belinda Cleeland, Malou Estier, Gideon Futerman, Oliver Guest, et al.

Posted on rand.org Sep 11, 2024. Published in: Carnegie Endowment for International Peace website (2024).

Managing the risks of artificial intelligence (AI) will require international coordination among many actors with different interests, values, and perceptions. Experience with other global challenges, like climate change, suggests that developing a shared, science-based picture of reality is an important first step toward collective action. In this spirit, last year the UK government led twenty-eight countries and the European Union (EU) in launching the International Scientific Report on the Safety of Advanced AI.

The UK-led report has accomplished a great deal in a short time, but it was designed with a narrow scope, a limited set of stakeholders, and a short initial mandate that is now nearing its end. Meanwhile, the United Nations (UN) is moving toward establishing its own report process, though key parameters remain undecided. And a hodgepodge of other entities—including the Organisation for Economic Co-operation and Development (OECD), the emerging network of national AI Safety Institutes (AISIs), and groupings of scientists around the world—are weighing their own potential contributions toward a global understanding of AI.

How can all these actors work together toward the common goal of international scientific agreement on AI's risks? There has been surprisingly little public discussion of this question, even as governments and international bodies engage in quiet diplomacy. Moreover, the difficulty of the challenge is not always fully acknowledged. Compared to climate change, for example, AI's impacts are more difficult to measure and predict, and more deeply entangled in geopolitical tensions and national strategic interests.

To discuss the way forward, the Oxford Martin AI Governance Initiative and the Carnegie Endowment for International Peace brought together a group of experts at the intersection of AI and international relations in July. Six major ideas emerged from that discussion:

  • No single institution or process can lead the world toward scientific agreement on AI's risks. There are too many conflicting requirements to address within a single framework or institution. Global political buy-in depends on including a broad range of stakeholders, yet greater inclusivity reduces speed and clarity of common purpose. Appealing to all global audiences would require covering many topics, and could come at the cost of coherence. Scientific rigor demands an emphasis on peer-reviewed research, yet this rules out the most current proprietary information held by industry leaders in AI development. Because no one effort can satisfy all these competing needs, multiple efforts should work in complementary fashion.
  • The UN should consider leaning into its comparative advantages by launching a process to produce periodic scientific reports with deep involvement from member states. As with the Intergovernmental Panel on Climate Change (IPCC), this approach can help scientific conclusions achieve political legitimacy and can nurture policymakers' relationships and will to act. The reports could be produced over a cycle lasting several years and cover a broad range of AI-related issues, bringing together and addressing the priorities of a variety of global stakeholders. In contrast, a purely technical, scientist-led process under UN auspices could dilute the content on AI risks while also failing to reap the legitimating benefits of the UN's universalist structure.
  • A separate international body should continue producing annual assessments that focus narrowly on the risks of "advanced" AI systems, led primarily by independent scientists. The rapid technological change, potential scale of impacts, and intense scientific challenges of this topic call for a dedicated process that can operate more quickly and with more technical depth than the UN process. It would operate similarly to the UK-led report, but with greater global inclusion, a wider range of data sources, and a permanent institutional home. The UN could take this on, but attempting to lead both this report and the one above under a single organization risks compromising this report's speed, focus, and independence.
  • There are at least three plausible, if imperfect, candidates to host the report dedicated to risks from advanced AI. The network of AISIs is a logical successor to the UK-led effort, but it faces institutional uncertainties. The OECD has a strong track record of similar work, though its membership remains somewhat exclusive. The International Science Council brings less geopolitical baggage but has weaker funding structures. Regardless of who leads, all of these organizations—and others—should be actively incorporated into a growing, global public conversation on the science of advanced AI risks.
  • The two reports should be carefully coordinated to enhance their complementarity without compromising their distinct advantages. Some coordination would enable the UN to draw on the independent report's technical depth while helping that report gain political legitimacy and influence. However, excessive entanglement could slow or compromise the independent report and erode the inclusivity of the UN process. Promising mechanisms include memoranda of understanding, mutual membership or observer status, jointly run events, presentations on intersecting areas of work, and shared advisors, experts, or staff.
  • It may be necessary to continue the current UK-led process until other processes become established. Any new process will take time to achieve stakeholder buy-in, negotiate key parameters, hire staff, build working procedures, and produce outputs. The momentum and success of the UK-led process should not be squandered after the first edition is presented at France's AI Action Summit in February 2025.

Document Details

  • Publisher: Carnegie Endowment for International Peace
  • Availability: Non-RAND
  • Year: 2024
  • Pages: 38
  • Document Number: EP-70620

This publication is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.
