Conspiracy Theories Have Much in Common. Their Differences May Hold the Key to Identifying When They Turn Violent.
Dec 22, 2021
Conspiracy theories circulated via social media shift public discourse away from facts and analysis and can lead to direct public harm. Social media platforms face a difficult technical and policy challenge in trying to mitigate the harm caused by online conspiracy theory language. As part of an effort by Google's Jigsaw unit to confront emerging threats and incubate new technology for a safer world, RAND researchers conducted a modeling effort to improve machine-learning (ML) technology for detecting conspiracy theory language. They developed a hybrid model that draws on linguistic and rhetorical theory to boost performance, and they aimed to synthesize existing research on conspiracy theories using new insight from this improved modeling effort. This report describes the results of that effort and offers recommendations to counter the effects of conspiracy theories that are spread online.
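To make the idea of a "hybrid" model concrete, the sketch below combines a statistical text-similarity signal with hand-crafted rhetorical-cue features. This is an illustrative assumption, not the report's actual model: the cue list (`RHETORICAL_CUES`), the prototype text, and the blending weights are all hypothetical placeholders standing in for the linguistic and rhetorical features the researchers engineered.

```python
# Hypothetical sketch of a hybrid detector: a bag-of-words similarity
# signal (the "ML" side) blended with counts of hand-crafted rhetorical
# cues (the "linguistic theory" side). Cues, prototype text, and
# weights are illustrative assumptions, not from the report.
import math
from collections import Counter

RHETORICAL_CUES = [  # hypothetical cue phrases
    "they don't want you to know", "wake up", "cover-up", "the truth",
]

def rhetorical_score(text: str) -> float:
    """Count how many hand-crafted rhetorical cues appear in the text."""
    lowered = text.lower()
    return sum(cue in lowered for cue in RHETORICAL_CUES)

def bag_of_words(text: str) -> Counter:
    """Naive whitespace tokenization into a term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Hypothetical "prototype" standing in for a trained statistical model.
CONSPIRACY_PROTOTYPE = bag_of_words(
    "secret plot hidden agenda cover-up they control everything"
)

def hybrid_score(text: str, w_ml: float = 0.5, w_rhet: float = 0.5) -> float:
    """Blend the statistical signal with the rhetorical-cue signal."""
    ml_signal = cosine(bag_of_words(text), CONSPIRACY_PROTOTYPE)
    rhet_signal = min(rhetorical_score(text) / len(RHETORICAL_CUES), 1.0)
    return w_ml * ml_signal + w_rhet * rhet_signal
```

In a real system, the prototype-similarity stand-in would be a trained classifier and the cue list would come from the rhetorical analysis; the point of the sketch is only that the two signals are computed separately and then combined.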
Introduction: Detecting and Understanding Online Conspiracy Language
Making Sense of Conspiracy Theories
Modeling Conspiracy Theories: A Hybrid Approach
Conclusion and Recommendations
Data and Methodology
Stance: Text Analysis and Machine Learning