Exploring ways to regulate and build trust in AI

Researchers brought together evidence on the use of labelling initiatives, codes of conduct and other voluntary, self-regulatory mechanisms for the ethical and safe development of artificial intelligence systems.
What is the issue?
Artificial intelligence (AI) is recognised as a strategically important technology that can contribute to a wide array of societal and economic benefits. However, it can also present serious risks, challenges and unintended consequences.
Within this context, trust in AI systems is necessary for the broader use of these technologies in society. It is therefore vital that AI-enabled products and services are developed and implemented responsibly, safely and ethically.
How did we help?
RAND Europe conducted research to bring together evidence on the use of labelling initiatives, codes of conduct and other voluntary, self-regulatory mechanisms for the ethical and safe development of AI systems.
Through a literature review, a crowdsourcing exercise with experts and a series of interviews, we identified and analysed such mechanisms across diverse geographical contexts, sectors, AI applications and stages of development.
The report sets out common themes, highlights notable divergences between initiatives and outlines the opportunities and challenges that are associated with developing and implementing them.
We also set out key learnings that stakeholders can take forward when designing and implementing voluntary, self-regulatory mechanisms, together with their potential implications for future action.
Follow-up virtual roundtable discussion
We also hosted a virtual roundtable in April 2022 to discuss the findings of the research. The event brought together policymakers, researchers and industry representatives. Taking a forward-looking approach, participants considered the practicality of developing various tools and discussed a range of considerations relevant against the backdrop of the European Commission’s draft proposals for an EU-wide regulatory framework on AI (the ‘AI Act’). The views and ideas generated at the roundtable have been written up in a short report to stimulate further debate and inform thinking as policy around these issues continues to develop rapidly.
What did we find?
Initiatives
- There are two broad categories of self-regulatory initiatives:
  - Mechanisms or tools that define a certain standard for AI applications and outline a set of criteria against which that standard is assessed, generally through an audit process, e.g. labels and certification schemes; kite marks, trust marks and quality marks; and seals.
  - Statements that set out and define certain requirements or principles that should be followed by organisations developing or procuring AI applications to ensure the safe and ethical development and use of these systems, e.g. codes of conduct or codes of ethics.
- The initiatives span different stages of development, from early-stage proposals to operational examples, but many have yet to gain widespread acceptance and use.
- The different types of mechanisms share intended aims and outcomes:
  - To increase trust in AI applications by signalling their technical reliability and quality.
  - To potentially increase competition by making AI applications on the market more transparent and comparable.
  - To help organisations developing AI applications understand how to conform to emerging standards and good practice.
- Many initiatives assess AI applications against ethical and legal criteria that emphasise safety, human rights and societal values, and are often based on principles from existing high-level ethical frameworks.
- The different mechanisms have been developed by a broad and diverse range of organisations across the public, private and third sectors, and at national, regional and global levels.
- Labelling initiatives and codes of conduct assess the development and use of AI across multiple sectors and industries, although some focus on a specific sector.
Opportunities
- The voluntary application of codes of conduct and labels has the potential to uphold the ethical development and use of AI products and services.
- Codes of conduct and labels could help to build trust in AI products and services, which often suffer from a reputation for being opaque systems.
- AI labelling and codes of conduct could strengthen stakeholder relationships within the AI supply chain as AI is being developed and deployed.
- An AI labelling and certification scheme can act as a benchmarking tool, signalling a certain standard to companies and end-users in the market and potentially helping to establish an industry-wide standard.
- AI certificates could help define market standards and potentially increase competitiveness in a global market.
- AI labels and codes of conduct can help reintroduce and reinforce human oversight in technological processes.
Challenges
- The complexity of AI applications makes it difficult to develop and enforce criteria for assessing ethical and legal principles. The societal impact of AI depends not only on the technology but also on the system’s goals and how they are embedded within an organisation.
- The complexity of AI applications also requires the involvement of multiple stakeholders in the design and implementation of assessment processes.
- There are challenges related to assessing the ethical and legal characteristics of AI applications for different use cases and contexts, depending on the field of application as well as the cultural context in which the AI system operates.
- The potential cost and burden of assessment could result in a lack of stakeholder buy-in to schemes, particularly for smaller businesses.
- When designing and rolling out schemes, there are trade-offs between protecting consumers and driving innovation and competition in the market.
- From a regulatory perspective, there are difficulties involved in drawing up labelling schemes and codes of conduct for a rapidly evolving technology like AI.
- There are challenges in ensuring the legitimacy and accountability of initiatives through transparent third-party auditing.
- A profusion of different initiatives presents challenges for companies and consumers and could lead to reduced trust.
- There is a challenge around incentivising the adoption of voluntary, self-regulatory mechanisms.
What are the key learnings?
- Involving an independent and reputable organisation could strengthen trust in an initiative, ensure effective oversight, and promote credibility and legitimacy.
- Actively engaging multiple interdisciplinary stakeholders to integrate a diversity of views and expertise in the design and development of AI self-regulatory tools could increase buy-in and adoption.
- The use of innovative approaches can help to address the perceived costs and burden associated with implementing self-regulatory mechanisms and also provide flexibility and adaptability in assessing AI systems.
- It is important to share learnings and communicate good practice, and to evaluate self-regulatory initiatives so that their impacts and outcomes can be tracked over time.
- There is a growing need for coordination and harmonisation of different initiatives to avoid the risk of a fragmented ecosystem and to promote clarity and understanding in the market.
- Rather than taking a one-size-fits-all approach, it will be important to consider using a combination of different self-regulatory tools for diverse contexts and use cases to encourage their voluntary adoption.