Detecting Conspiracy Theories on Social Media

Improving Machine Learning to Detect and Understand Online Conspiracy Theories

by William Marcellino, Todd C. Helmus, Joshua Kerrigan, Hilary Reininger, Rouslan I. Karimov, Rebecca Ann Lawrence



Research Questions

  1. How can we better detect the spread of online conspiracy theories at scale?
  2. How do online conspiracies function linguistically and rhetorically?

Conspiracy theories circulated on social media contribute to a shift in public discourse away from facts and analysis and can cause direct public harm. Social media platforms face a difficult technical and policy challenge in trying to mitigate the harm caused by online conspiracy theory language. As part of an effort by Google's Jigsaw unit to confront emerging threats and incubate new technology for a safer world, RAND researchers conducted a modeling effort to improve machine-learning (ML) detection of conspiracy theory language. They developed a hybrid model that uses linguistic and rhetorical theory to boost performance, and they synthesized existing research on conspiracy theories with new insights from the improved modeling. This report describes the results of that effort and offers recommendations to counter the effects of conspiracy theories spread online.
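The report's actual model is not reproduced here, but the hybrid idea it describes, pairing a conventional ML topic classifier with features drawn from linguistic and rhetorical theory, can be sketched in miniature. In this illustrative sketch, both scorers are toy keyword-based stand-ins, and the marker lists, function names, and blending weight are all hypothetical assumptions, not the report's method:

```python
# Toy stand-ins for the two signals a hybrid detector might combine.
# Marker lists and weights below are illustrative assumptions only.

RHETORICAL_MARKERS = {"they", "them", "us", "cover-up", "truth", "sheeple"}
TOPIC_TERMS = {"chemtrails", "hoax", "conspiracy", "microchip"}


def _tokens(text: str) -> list[str]:
    """Lowercase, punctuation-stripped word tokens."""
    return [t.strip(".,!?\"'") for t in text.lower().split()]


def topic_score(text: str) -> float:
    """Stand-in for a conventional ML topic classifier's probability."""
    toks = _tokens(text)
    if not toks:
        return 0.0
    hits = sum(1 for t in toks if t in TOPIC_TERMS)
    return min(1.0, hits / 3)


def rhetoric_score(text: str) -> float:
    """Stand-in for a rhetorical-stance score (e.g. "us vs. them" framing)."""
    toks = _tokens(text)
    if not toks:
        return 0.0
    hits = sum(1 for t in toks if t in RHETORICAL_MARKERS)
    return min(1.0, hits / 3)


def hybrid_score(text: str, w_topic: float = 0.5) -> float:
    """Weighted blend of the two signals; the weight is a placeholder."""
    return w_topic * topic_score(text) + (1 - w_topic) * rhetoric_score(text)
```

The point of the sketch is structural: a post that triggers only one signal scores lower than one that triggers both, which mirrors the finding that combining topic detection with rhetorical features outperforms either signal alone.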

Key Findings

  • The hybrid ML model improved conspiracy topic detection.
  • The hybrid ML model substantially outperformed either single model alone in detecting conspiratorial language.
  • Hybrid models likely have broad application to detecting any kind of harmful speech, not just that related to conspiracy theories.
  • Some conspiracy theories, though harmful, rhetorically invoke legitimate social goods, such as health and safety.
  • Some conspiracy theories rhetorically function by creating hate-based "us versus them" social oppositions.
  • Direct contradiction or mockery is unlikely to change conspiracy theory adherence.

Recommendations

  • Engage transparently and empathetically with conspiracists.
  • Correct conspiracy-related false news.
  • Engage with moderate members of conspiracy groups.
  • Address fears and existential threats.

Table of Contents

  • Chapter One

    Introduction: Detecting and Understanding Online Conspiracy Language

  • Chapter Two

    Making Sense of Conspiracy Theories

  • Chapter Three

    Modeling Conspiracy Theories: A Hybrid Approach

  • Chapter Four

    Conclusion and Recommendations

  • Appendix A

    Data and Methodology

  • Appendix B

    Stance: Text Analysis and Machine Learning

This research was sponsored by Google's Jigsaw unit and conducted within the International Security and Defense Policy Center of the RAND National Security Research Division (NSRD).

This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

Permission is given to duplicate this electronic document for personal use only, as long as it is unaltered and complete. Copies may not be duplicated for commercial purposes. Unauthorized posting of RAND PDFs to a non-RAND Web site is prohibited. RAND PDFs are protected under copyright law. For information on reprint and linking permissions, please visit the RAND Permissions page.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.