Misinformation, disinformation and hateful extremism during the COVID-19 pandemic
An exploration of the links between hateful extremism and false information identified online interventions and policy responses, and suggested policy considerations and areas that would benefit from further exploration.
What is the issue?
The COVID-19 pandemic has provided a breeding ground for conspiracy theories, disinformation and hateful extremism. During lockdown and with rising unemployment, more people have been spending time at home and online, with greater exposure to false information and hateful extremist narratives. Forums such as 4chan and Reddit are hubs for amplifying and spreading disinformation, as are mainstream social media platforms such as Facebook, Twitter and YouTube.
Particularly in the COVID-19 context, it is important to ensure that today’s digital generations are equipped to identify hateful extremism and false narratives in order to build societal resilience. There is a need to consolidate existing research, better understand the evidence base and address gaps to inform primary research, policy planning and decision making.
How did we help?
The Commission for Countering Extremism (CCE) commissioned Ipsos MORI and RAND Europe to undertake a study examining hateful extremism within society during COVID-19. RAND Europe conducted a literature review which explored the links between hateful extremism and false information and identified associated online interventions and policy responses.
The research team used a Rapid Evidence Assessment (REA), which involved a review of 93 relevant papers across disciplines including psychology, political science, sociology and law.
What did we find?
False information can shape hateful extremist beliefs and behaviours, contributing to the growth of echo chambers and a rise in hate incidents.
Hateful extremists are incentivised to spread disinformation and conspiracy theories by the increased exposure and recruitment benefits this brings.
Hateful extremist actors typically direct their narratives against ‘out-groups’, but these narratives frame the pandemic in different ways.
While empirical evidence on the effectiveness of online interventions is limited, the literature suggests that fact-checking, counter-speech, content takedowns and education show promise.
The reviewed literature offers a number of recommendations for the design and delivery of future interventions:
For governments, sources highlight a need to dedicate more resources to combat false information in order to build societal resilience, as well as to conduct or commission further research into the impacts of hateful extremist narratives.
Social media companies and media organisations are also urged to take more responsibility, respectively by managing the content on their platforms and by ensuring that outlets adhere to good journalistic practices (e.g. avoiding clickbait headlines).
What do we recommend?
The report sets out policy considerations for the CCE based on the insights identified in the literature:
Investing in research could help address evidence gaps and strengthen responses to false information and hateful extremist narratives.
Holding tech companies to account could increase their responsiveness to false information and hateful extremism.
Investing in education could help raise awareness of the dangers of false information and hateful extremism.
Collecting and publishing information regarding indicators of hateful extremism could help improve policy responses.
Collaborating across sectors could ensure that interventions are mutually reinforcing.
Based on the evidence gaps identified, the study highlights areas that would benefit from further analysis and exploration:
Independent and robustly designed evaluations of existing interventions.
Research on ‘directional motivations’ (an individual’s propensity to hold onto existing attitudes).
Studies with broader coverage in terms of geography, languages and online content.