Report
Virtual roundtable on labelling initiatives, codes of conduct and other voluntary mechanisms to build trustworthy artificial intelligence (AI) systems
Jun 9, 2022
This research analysed evidence on the use of labelling schemes, codes of conduct and other self-regulatory mechanisms for the ethical and safe development of AI applications. Through a literature review, crowdsourcing and interviews, we highlight a set of common themes and notable divergences, and examine anticipated opportunities and challenges associated with developing and implementing these tools. We also outline key learnings for the future.
From principles to practice and considerations for the future
Artificial intelligence (AI) is recognised as a strategically important technology that can contribute to a wide array of societal and economic benefits. However, it is also a technology that may present serious challenges and have unintended consequences. Within this context, trust in AI is recognised as a key prerequisite for the broader uptake of this technology in society. It is therefore vital that AI products, services and systems are developed and implemented responsibly, safely and ethically.
Through a literature review, a crowdsourcing exercise and interviews with experts, we examined evidence on the use of labelling initiatives and schemes, codes of conduct and other voluntary, self-regulatory mechanisms for the ethical and safe development of AI applications. We draw out a set of common themes, highlight notable divergences between these mechanisms, and outline anticipated opportunities and challenges associated with developing and implementing them. We also offer a series of topics for further consideration to best balance these opportunities and challenges. These topics present a set of key learnings that stakeholders can take forward to understand the potential implications for future action when designing and implementing voluntary, self-regulatory mechanisms. The analysis is intended to stimulate further discussion and debate across stakeholders as applications of AI continue to multiply across the globe, particularly in light of the European Commission's recently published draft proposal for AI regulation.
We identified and analysed a range of self-regulatory mechanisms — such as labelling initiatives, certification schemes, seals, trust/quality marks and codes of conduct — across diverse geographical contexts, sectors and AI applications.
The initiatives span different stages of development, from early-stage (and still conceptual) proposals to operational examples, but many have yet to gain widespread acceptance and use.
Many of the initiatives assess AI applications against ethical and legal criteria that emphasise safety, human rights and societal values, and are often based on principles that are informed by existing high-level ethical frameworks.
We found a series of opportunities and challenges associated with the design, development and implementation of these voluntary, self-regulatory tools for AI applications.
We outlined a set of key considerations that stakeholders can take forward when designing, implementing and incentivising the take-up of voluntary, self-regulatory mechanisms, helping them to understand the potential implications for future action and to contribute to a flexible and agile regulatory environment.
Chapter One
Introduction and overview
Chapter Two
The role of labelling initiatives, codes of conduct and other self-regulatory mechanisms in AI development and use
Chapter Three
Concluding remarks and reflections on the future
Annex A
Methodological approach
Annex B
Longlist of initiatives
Annex C
Detailed descriptions of some of the initiatives
The research described in this report was prepared for Microsoft and conducted by RAND Europe.
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.