This dissertation addresses the proliferation of false information shared on social media by users during U.S. election seasons. I characterize false information on social media in stages of Production, Dissemination, and Consumption. Social media platforms have responded by developing Remove, Reduce, and, recently, Inform policies that broadly respond to each stage, respectively. This dissertation investigates potential Inform policies and how social media platforms can apply them; the most applicable policy options are summarized for Facebook, Twitter, and YouTube in short industry white papers in the appendices. After clarifying terms used in discussing disinformation (Chapter 2), I make the case for social media platforms enacting or expanding their Inform policies to help users resist false information (Chapter 3). I then conduct a literature review of other fields with similar Inform prerogatives (such as public health, advertising, and social psychology) and identify principles social media policymakers could use in constructing their Inform policies (Chapter 4). I then offer two proofs of concept for applying Inform policies. First, I construct a network of 7 million tweets using the Louvain community detection algorithm and conduct a text analysis to demonstrate how a platform could find the users most exposed to false information, and I recommend a framework for how platforms could locate those users in a least-biased way to avoid accusations of political bias (Chapter 5). Second, I apply the most promising finding from the literature review, inoculation or "pre-bunks," in a survey experiment to help respondents (n = 634) resist behavioral reactions to disinformation memes from the 2016 U.S. presidential campaign. I test how inoculations interact with emotions and how that interaction relates to resistance to persuasion from disinformation, finding that inoculations never hurt and sometimes improved resistance to disinformation memes (Chapter 6).
Finally, I suggest that platforms like Facebook, Twitter, and YouTube apply or expand Inform policies by building trust through transparency and communication with users; by focusing on evidence-based, direct and indirect approaches to helping users resist false information; and by applying principles of influence, together with what they know of their users, to build attractive products that help users resist false information.
Table of Contents
Persuasion on Social Media: Tools to Guide Social Media Platforms in Making User-focused Interventions
Evidence-based Interventions for Social Media Platforms to Help Users Resist False Information
Identifying Users for Interventions: Social Network and Text Analysis of False Information in Twitter Communities
Inoculating Users Against False Information
Bringing it All Together: Recommendations and Policy Guides for Social Media Platforms to Help Users Resist False Information
Facebook Brief: Help Users Resist Disinformation
Twitter Brief: Help Users Resist Disinformation
YouTube Brief: Help Users Resist Disinformation
Affordances Stoplight Chart
Materials Supporting Survey Analyses