Tackling the challenges of algorithm-driven online media services

Countering the spread of disinformation is proving to be a significant task for societies around the world.

Researchers from RAND Europe and Open Evidence drew on state-of-the-art knowledge and data to understand media literacy and online empowerment, and to recommend ways to improve them, in response to the algorithms used by online media services and platforms.

Background

As the debate surrounding disinformation and accountability intensifies, algorithm-driven media services are moving to the forefront of the discussion. With important questions being raised about how media platforms shape opinion, it is critical to consider the extent to which citizens are aware both of the impact of algorithms on their content choices and of the validity of the information they receive through social media.

Goals

The European Commission commissioned RAND Europe and Open Evidence to identify the potential risks of algorithm-driven media services and ways to mitigate their impacts. Exploring the interplay between biases in algorithms and people’s own cognitive biases, the study aimed to:

  • Analyse the issues posed by algorithm-driven media services given the role of media in a well-functioning democracy;
  • Identify the drivers of these issues and their potential consequences in the context of fundamental rights; and
  • Identify potential approaches to tackle the issues associated with algorithm-driven media services and platforms, or mitigate their consequences.

The results of the study have helped inform EU media policy as it takes steps to address the emerging challenges posed by algorithms.

Methodology

A combination of quantitative and qualitative methods was used to carry out this research, including a literature review, key informant interviews with selected experts, data analysis, and stakeholder consultation and engagement.

Findings

  • People’s choices and reactions to content can lead to the viral dissemination and amplification of harmful content.

    Policy initiatives, changes to social media algorithms and media literacy programmes for online users have all been employed to address these challenges, but evidence of what works is still scarce. Research has shown that people tend to be unaware of their own cognitive biases and underestimate the extent to which algorithms influence their behaviour on social media platforms.

  • Improving the media literacy of consumers and reducing their vulnerability to disinformation is a necessary part of the solution.

    An approach that also helps people become aware of their own behaviour while online is worth exploring. The rational, analytical thinking needed for media literacy can decline as people shift into a more reactive and less rational mode of thinking while engaging with news content online.

Recommendations

This study proposes three concrete behavioural science experiments to test whether social media platforms could counter cognitive biases and trigger a more analytical type of thinking by online users. This low-key approach would be employed at the point of media consumption, i.e. just before a user is about to click on an article link, prompting them to pause for a moment and consider what they are about to share and/or read. A rough illustration of what such a prompt could look like in practice follows below.
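The study does not specify an implementation, but the following minimal sketch (in TypeScript, purely illustrative) shows one way a point-of-consumption "pause" prompt could work: intercepting clicks on article links and asking the user to reflect before continuing. The link selector, prompt wording and use of a plain confirmation dialog are assumptions for illustration only, not part of the proposed experiments.

```typescript
// Illustrative sketch only: a point-of-consumption "pause" nudge.
// Assumptions (not from the study): article links carry a
// data-article-link attribute, and a simple confirm() dialog stands in
// for whatever UI a real platform would use.

function attachPausePrompt(root: Document = document): void {
  root.addEventListener("click", (event: MouseEvent) => {
    const target = (event.target as HTMLElement | null)?.closest(
      "a[data-article-link]"
    ) as HTMLAnchorElement | null;

    if (!target) {
      return; // Not an article link; let the click proceed normally.
    }

    // Prompt the user to pause and reflect before opening the article.
    const proceed = window.confirm(
      "Take a moment: do you know the source of this article, " +
        "and have you read it before sharing?"
    );

    if (!proceed) {
      // The user chose to pause; cancel the navigation.
      event.preventDefault();
    }
  });
}

// Wire the nudge up once the script loads.
attachPausePrompt();
```

In an actual experiment, the wording, timing and form of such a prompt would presumably be varied and its effect on reading and sharing behaviour measured; the sketch above is only meant to make the "pause before you click" idea concrete.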


Read the full study

Additional Project Team Members

  • Advait Deshpande
  • Axelle Devaux