Examining the landscape of tools for trustworthy AI in the UK and the US

What is the issue?

Artificial intelligence (AI) has become a critical area of interest for stakeholders around the globe, prompting many discussions and initiatives to ensure that AI is developed and deployed responsibly and ethically. Over the years, organisations worldwide have produced a proliferation of frameworks, declarations and principles to guide the development of trustworthy AI. These frameworks articulate the desirable outcomes and objectives of trustworthy AI systems, such as safety, fairness, transparency, accountability and privacy. However, they do not provide specific guidance on how to achieve these objectives in practice. This is where tools for trustworthy AI become important. Broadly, these tools encompass specific methods, techniques, mechanisms and practices that can help to measure, evaluate, communicate and improve the trustworthiness of AI systems and applications.

How did we help?

The aim of this study was to examine the range of tools designed for the development, deployment and use of trustworthy AI in the United Kingdom and the United States. The study identified challenges, opportunities and considerations for policymakers regarding future UK–US alignment and collaboration on tools for trustworthy AI. The research was commissioned by the British Embassy Washington via the UK Foreign, Commonwealth and Development Office (FCDO) and the UK Department for Science, Innovation and Technology (DSIT).

We used a mixed-methods approach to carry out the research. This involved a focused scan and review of documents and databases to identify examples of tools for trustworthy AI developed and deployed in the UK and the US. We interviewed experts connected to some of the identified tools, as well as wider stakeholders with an understanding of tools for trustworthy AI. In parallel, we conducted an online crowdsourcing exercise with a range of experts to collect additional information on selected examples of tools.

What did we find?

  • The landscape of trustworthy AI in the UK and the US is complex and multifaceted. It is evolving from principles to practice, with high-level guidelines increasingly complemented by more specific, practical tools.
  • We identified over 230 tools for trustworthy AI, indicative of a potentially fragmented landscape.
  • The landscape of tools for trustworthy AI in the US is more technical in nature, while the UK landscape is more procedural.
  • Compared to the UK, the US has a greater degree of involvement of academia in the development of tools for trustworthy AI.
  • Large US technology companies are developing wide-ranging toolkits to make AI products and services more trustworthy.
  • There is limited evidence about the formal assessment of tools for trustworthy AI.
  • Some companies outside the AI sector are developing their own internal guidelines on AI trustworthiness to ensure they comply with ethical principles.
  • The development of multimodal foundation models has increased the complexity of developing tools for trustworthy AI.

What can be done?

We propose a series of considerations for key decision makers involved in the tools for trustworthy AI ecosystem in the UK and the US. We offer these suggestions as a set of cross-cutting practical actions. Taken together, and combined with other activities and partnership frameworks in the wider context of AI regulatory policy debates and collaboration, these actions could help contribute to a more joined-up, aligned and agile ecosystem between the UK and the US.

  • Action 1: Link up with relevant stakeholders to proactively track and analyse the landscape of tools for trustworthy AI in the UK, the US and beyond.
  • Action 2: Systematically capture experiences and lessons learnt on tools for trustworthy AI, share those insights with stakeholders, and use them to anticipate potential future directions.
  • Action 3: Promote the consistent use of a common vocabulary for trustworthy AI among stakeholders in the UK and the US.
  • Action 4: Encourage the inclusion of assessment processes in the development and use of tools for trustworthy AI to gain a better understanding of their effectiveness.
  • Action 5: Continue to partner with international organisations and initiatives, build diverse coalitions, and promote interoperable tools for trustworthy AI.
  • Action 6: Join forces to provide resources such as data and computing power to support and democratise the development of tools for trustworthy AI.
