Examining the landscape of tools for trustworthy AI in the UK and the US

Current trends, future possibilities, and potential avenues for collaboration

Published May 14, 2024

by Salil Gunashekar, Henri van Soest, Michelle Qu, Chryssa Politi, Maria Chiara Aquilino, Gregory Smith


Research Questions

  1. What does the landscape of tools for trustworthy AI look like in the UK and the US?
  2. What practical actions should be considered looking ahead to future UK–US alignment and collaboration on tools for trustworthy AI?

Over the years, there has been a proliferation of frameworks, declarations and principles from various organisations around the globe to guide the development of trustworthy artificial intelligence (AI). These frameworks articulate the desirable outcomes and objectives of trustworthy AI systems, such as safety, fairness, transparency, accountability and privacy. However, they do not provide specific guidance on how to achieve these objectives in practice. This is where tools for trustworthy AI become important. Broadly, these tools encompass the specific methods, techniques, mechanisms and practices that can help to measure, evaluate, communicate and improve the trustworthiness of AI systems and applications.
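The report catalogues such tools rather than prescribing implementations, but a small, hypothetical sketch can make the category concrete. The Python example below is an illustration of ours, not drawn from the study: it shows what one narrow, technical tool of this kind might look like, namely a function that measures demographic parity difference (a common fairness metric) for a binary classifier's predictions across two groups. All names and data in it are invented for illustration.

```python
# Illustrative sketch only (not from the report): a minimal "tool for
# trustworthy AI" that measures one fairness property -- demographic
# parity difference -- for a binary classifier's predictions.

from dataclasses import dataclass


@dataclass
class FairnessReport:
    rate_group_a: float
    rate_group_b: float
    parity_difference: float  # 0.0 means equal positive-prediction rates


def demographic_parity(predictions: list[int], groups: list[str]) -> FairnessReport:
    """Compare positive-prediction rates between two groups, 'A' and 'B'."""

    def positive_rate(group: str) -> float:
        # Collect the predictions (1 = favourable decision) for this group.
        members = [p for p, g in zip(predictions, groups) if g == group]
        return sum(members) / len(members) if members else 0.0

    rate_a = positive_rate("A")
    rate_b = positive_rate("B")
    return FairnessReport(rate_a, rate_b, abs(rate_a - rate_b))


if __name__ == "__main__":
    # Hypothetical model outputs and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
    report = demographic_parity(preds, grps)
    print(f"Group A rate: {report.rate_group_a:.2f}, "
          f"Group B rate: {report.rate_group_b:.2f}, "
          f"difference: {report.parity_difference:.2f}")
```

Many of the tools catalogued in the report bundle metrics like this with documentation, auditing and governance processes; the sketch above covers only the measurement step.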

Against the backdrop of a fast-moving and increasingly complex global AI ecosystem, this study mapped UK and US examples of developing, deploying and using tools for trustworthy AI. The research also identified some of the challenges and opportunities for UK–US alignment and collaboration on the topic and proposed a set of practical priority actions for further consideration by policymakers. The report's evidence aims to inform aspects of future bilateral cooperation between the UK and US governments in relation to tools for trustworthy AI. Our analysis is also intended to stimulate further debate and discussion among stakeholders as the capabilities and applications of AI continue to grow and the need for trustworthy AI becomes even more critical.

Key Findings

  • The landscape of trustworthy AI in the UK and the US is complex and multifaceted. It is evolving from principles to practice, with high-level guidelines increasingly complemented by more specific, practical tools.
  • Indicative of a potentially fragmented landscape, we identified over 230 tools for trustworthy AI.
  • The landscape of tools for trustworthy AI in the US is more technical in nature, while the UK landscape is more procedural.
  • Compared with the UK, the US has greater involvement of academia in the development of tools for trustworthy AI.
  • Large US technology companies are developing wide-ranging toolkits to make AI products and services more trustworthy.
  • Some non-AI companies are developing their own internal guidelines on AI trustworthiness to ensure they comply with ethical principles.
  • There is limited evidence about the formal assessment of tools for trustworthy AI.
  • The development of multimodal foundation models has increased the complexity of developing tools for trustworthy AI.

Recommendations

  • Action 1: Link up with relevant stakeholders to proactively track and analyse the landscape of tools for trustworthy AI in the UK, the US and beyond.
  • Action 2: Systematically capture experiences and lessons learnt on tools for trustworthy AI, share those insights with stakeholders, and use them to anticipate potential future directions.
  • Action 3: Promote the consistent use of a common vocabulary for trustworthy AI among stakeholders in the UK and the US.
  • Action 4: Encourage the inclusion of assessment processes in the development and use of tools for trustworthy AI to gain a better understanding of their effectiveness.
  • Action 5: Continue to partner with international organisations and initiatives, build diverse coalitions, and promote interoperable tools for trustworthy AI.
  • Action 6: Join forces to provide resources such as data and computing power to support and democratise the development of tools for trustworthy AI.

Research conducted by

The research described in this report was commissioned by the British Embassy Washington via the UK Foreign, Commonwealth and Development Office (FCDO) and the UK Department for Science, Innovation and Technology (DSIT), and was conducted by RAND Europe.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.


RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.