Circle of Trust: Six Steps to Foster the Effective Development of Tools for Trustworthy AI in the UK and the U.S.


Aug 2, 2024

Digital hand holding digital tools, photo by sankai/Getty Images

This commentary originally appeared on OECD.AI on June 20, 2024.

AI subtly permeates people's daily lives, filtering email and powering navigation apps, but it also manifests in more remarkable ways, such as identifying disease and helping scientists tackle climate change.

Alongside this great potential, AI also introduces considerable risks. Fairness, reliability, accountability, privacy, transparency, and safety are critical concerns that prompt a crucial question: Can AI be trusted? This question has sparked debates from London to Washington and Brussels to Addis Ababa as policymakers wrestle with the unprecedented challenges and opportunities ushered in by AI.

AI systems and applications are considered trustworthy when they can be reliably developed and deployed without adverse consequences for individuals, groups, or broader society. A multitude of frameworks, declarations, and principles have emerged from organisations worldwide to guide the responsible and ethical development of more trustworthy AI, including the European Commission's Ethics Guidelines for Trustworthy AI, the OECD AI Principles, and UNESCO's Recommendation on the Ethics of AI. These frameworks helpfully outline foundational elements, desired outcomes, and goals for trustworthy AI systems, echoing the critical concerns listed above. However, they offer little specific guidance on how to achieve these objectives, outcomes, and requirements in practice.

That's where tools for trustworthy AI come in. These tools help bridge the gap between AI principles and their practical implementation, providing resources to ensure AI is developed and used responsibly and ethically. Broadly speaking, they comprise specific methods, techniques, mechanisms, and practices that can help measure, evaluate, communicate, and improve the trustworthiness of AI systems and applications.

The Transatlantic Landscape

A RAND Europe study published in May 2024 assessed the present state of tools for trustworthy AI in the United Kingdom and the United States, two prominent jurisdictions deeply engaged in developing the AI ecosystem. Commissioned by the British Embassy in Washington, the study pinpoints examples of tools for trustworthy AI in the United Kingdom and the United States, identifying hurdles, prospects, and considerations for future alignment. The research took a mixed-methods approach, including document and database reviews, expert interviews, and an online crowdsourcing exercise.

Moving from Principles to Practice

The landscapes of tools for trustworthy AI in the United Kingdom and the United States are complex and evolving. High-level guidelines are increasingly complemented by more specific, practical tools. The RAND Europe study revealed a fragmented and growing landscape, identifying over 200 tools for trustworthy AI. The United States accounted for 70 percent of these tools, the United Kingdom for 28 percent, and the remainder were collaborations between U.S. and UK organisations. Drawing on the classification used by OECD.AI, the study found that the U.S. landscape leaned more towards technical tools, which address the technical dimensions of AI models, while the United Kingdom produced more procedural tools, which offer operational guidance on how AI should be implemented.

Interestingly, U.S. academia was more involved in tool development than its UK counterpart. Large U.S. tech companies are developing wide-ranging toolkits to make AI products and services more trustworthy, while some non-AI companies have their own internal guidelines on AI trustworthiness. However, there's limited evidence about the formal assessment of tools for trustworthy AI. Furthermore, multimodal foundation models that combine text and image processing capabilities make developing tools for trustworthy AI more complex.

Actions to Shape the Future of AI

The research led to a series of considerations for policymakers and other stakeholders in the United Kingdom and the United States. When combined with other activities and partnership frameworks, both bilateral and international, these practical actions could cultivate a more interconnected, harmonised, and nimble ecosystem between the United Kingdom and the United States:

  • Connect with relevant stakeholders to proactively track and analyse the landscape of tools for trustworthy AI in the United Kingdom, the United States, and beyond.
  • Systematically document experiences and lessons learned on tools for trustworthy AI, share those insights with stakeholders, and use them to anticipate potential future directions.
  • Promote consistent and shared vocabulary for trustworthy AI among UK and U.S. stakeholders.
  • Include assessment processes in developing and using tools for trustworthy AI to understand their effectiveness better.
  • Build diverse coalitions with international organisations and initiatives and promote interoperable tools for trustworthy AI.
  • Join forces to provide resources such as data and computing power to support and democratise the development of tools for trustworthy AI.

Figure 1: Practical considerations for UK and U.S. policymakers to help build a linked-up, aligned, and agile ecosystem

Figure themes: Engage and collaborate; Innovate and anticipate; Monitor and discover; Analyse and understand; Share and communicate; Learn and evaluate.

Action 1: Link up with relevant stakeholders to proactively track and analyse the landscape of tools for trustworthy AI in the United Kingdom, the United States, and beyond.

Action 2: Systematically capture experiences and lessons learnt on tools for trustworthy AI, share those insights with stakeholders, and use them to anticipate potential future directions.

Action 3: Promote the consistent use of a common vocabulary for trustworthy AI among stakeholders in the United Kingdom and the United States.

Action 4: Encourage the inclusion of assessment processes in the development and use of tools for trustworthy AI to gain a better understanding of their effectiveness.

Action 5: Continue to partner and build diverse coalitions with international organisations and initiatives, and to promote interoperable tools for trustworthy AI.

Action 6: Join forces to provide resources such as data and computing power to support and democratise the development of tools for trustworthy AI.

Potential stakeholders to involve across the different actions: Department for Science, Innovation and Technology (including the Responsible Technology Adoption Unit and UK AI Safety Institute); Foreign, Commonwealth & Development Office (including the British Embassy Washington); AI Standards Hub; UK Research and Innovation; AI Research Resource; techUK; Evaluation Task Force in the UK; Government Office for Science; National Institute of Standards and Technology; U.S. AI Safety Institute; National Science Foundation; National Artificial Intelligence Research Resource; U.S. national laboratories; Organisation for Economic Co-operation and Development; European Commission; United Nations (and associated agencies); standards development organisations.

Source: RAND Europe analysis

A Future of Trust?

The actions we suggest are not meant to be definitive or exhaustive. They are topics for further discussion and debate among relevant policymakers and, more generally, stakeholders in the AI community who want to make AI more trustworthy. These actions could inform and support the development of a robust consensus on tools, which would be particularly beneficial for future discussions about broader AI oversight.

The future of AI is not just about trustworthy technology; it's about trust, a subjective concept that could vary widely between individuals, contexts, and cultures. Ideally, more trustworthy AI should lead to greater trust, but this is not always the case. Conversely, a system could be trusted without being trustworthy if its shortcomings are not apparent to its users. This personal and global issue requires careful consideration and collaborative efforts from policymakers, researchers, industry leaders, and civil society worldwide.

More About This Commentary

Salil Gunashekar is deputy director of the Science and Emerging Technology Research Group at RAND Europe. Henri van Soest is a senior analyst in the Defence and Security Research Group at RAND Europe.

Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.