From Consensus to Conflict

Understanding Foreign Measures Targeting U.S. Elections

Marek N. Posard, Marta Kepe, Hilary Reininger, James V. Marrone, Todd C. Helmus, Jordan R. Reimer

Research published Oct 1, 2020


Key Findings

  • Foreign interference in U.S. politics has been a concern since the nation was founded.
  • Russian information efforts aim to elicit strong reactions and drive people to extreme positions to lower the odds they will reach a consensus—the bedrock of U.S. democracy.
  • New technologies have made Russia's information efforts easier to implement than the propaganda campaigns that the Soviets conducted during the Cold War.
  • Studies about how to defend against these efforts have focused on different units of analysis: Some studies focus on the original content; others focus on how this content spreads within networks; and still others focus on protecting consumers.
  • To respond to foreign interference, we recommend (1) taking a holistic approach that anticipates which groups of Americans are likely to become targets and (2) designing evidence-based preventive practices to protect them.

Throughout the remainder of the 2020 U.S. political campaign season, Russia might try again to manipulate and divide U.S. voters through social media. This report is the first in a four-part series aimed at helping policymakers and the public understand—and mitigate—the threat of online foreign interference in national, state, and local elections.[1]

Concerns over foreign influence in U.S. politics date back to the founding of this country. Alexander Hamilton warned about "the desire in foreign powers to gain an improper ascendant in our councils" (Hamilton, 1788). George Washington's farewell speech cautioned that "foreign influence is one of the most baneful foes of republican government" (Washington, 1796). During the Civil War, the Confederacy solicited the support of Britain and France against the Union (Central Intelligence Agency, undated). In 1940, the British covertly intervened in the U.S. presidential election in hopes of garnering support for U.S. intervention in World War II (Usdin, 2017). During the Cold War, the Soviet Union operated a sophisticated program involving covert and overt information efforts against the United States (Jones, 2019; Schoen and Lamb, 2012).

More recently, the U.S. Senate Committee on Intelligence presented evidence that Russia directed activities against state and local election infrastructures and tried to spread disinformation on social media during the 2016 presidential election (Select Committee on Intelligence of the United States Senate, 2019, undated, 2020). In 2018, the Department of Justice indicted the Internet Research Agency LLC, located in St. Petersburg, Russia, for interfering in U.S. elections as far back as 2014 (United States v. Internet Research Agency LLC, 2018).

Given these past and likely extant threats to U.S. elections, the California Governor's Office of Emergency Services asked the RAND Corporation's National Security Research Division for research to help it analyze, forecast, and mitigate threats by foreign actors targeting local, state, and national elections.

This four-part series (Figure 1) will present

  • what the literature says about information efforts by foreign actors
  • the results of an analysis of social media to identify potential exploits
  • a survey experiment to assess interventions to defend against some of these exploits
  • qualitative interviews of survey respondents to understand their views on falsehoods.

Figure 1. What This Series Covers

Disinformation Series

  1. Part 1 (this report): Reviews what existing research tells us about information efforts by foreign actors
  2. Part 2: Identifies potential information exploits in social media
  3. Part 3: Assesses interventions to defend against exploits
  4. Part 4: Explores people's views on falsehoods

In this report, we review some of the research on information efforts by foreign actors, focusing mainly on online environments.[2] First, we review what we believe is the intellectual basis of existing Russian information efforts: reflexive control theory.[3] Second, we review examples of information efforts by the Soviet Union and Russian Federation. Third, we review select research on strategies that inform how to defend against online information efforts.

We recommend that any strategies for responding to foreign information efforts be broad rather than narrow and that they anticipate which subgroups of Americans are likely targets of information efforts by foreign adversaries. Additionally, there is a need to develop evidence-based preventive interventions for those who are most likely targets within our society.


Modern Methods Are Rooted in the Cold War

We believe that reflexive control theory is, in part, the intellectual basis of current information efforts targeting the United States that are perpetrated by Russia and its proxies.[4] Scholars have described reflexive control as a means of conveying information to others that leads them to make some predetermined decision (Thomas, 2004). Reflexive control theory is a formal theoretical research program exploring this technique; it was developed by Vladimir Lefebvre and others and first appeared in Soviet military literature in the 1960s (Chotikul, 1986; Radin, Demus, and Marcinek, 2020; Thomas, 2004). Unlike game theory, this theory does not assume that individuals act rationally; rather, it assumes that people act "according to their image of the world and their image of their adversary's image of the world."[5] (Appendix A has a more detailed description of reflexive control theory.)

In general, reflexive control theory presents the world as a set of binary relationships: People are either in conflict or cooperation with one another, and their actions are either passive or aggressive in nature (Lefebvre, 1965, 1966). Although the binary view is an oversimplification, this assumption is based on how some historians characterize human relationships in the Soviet Union.[6]

The model structure implies that a person's decisions depend on what is socially desirable, insofar as they perceive what is desirable to others. Outside parties can manipulate this perception, which forms the basis for reflexive control as a tool for understanding information efforts. Belief in falsehoods is merely a byproduct of reflexive control, not the end goal. Disseminating false content is a tactic to manipulate one group's view of others.
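
To make this mechanism concrete, the minimal Python sketch below shows how a decision that depends on a second-order perception (my image of your image of me) can be flipped by manipulating only that perception. This is a toy illustration of the idea described above, not Lefebvre's formal model; the function name, the binary labels, and the decision rule are our own assumptions.

```python
# Toy illustration only: not Lefebvre's published formalism. The names, labels, and
# decision rule below are illustrative assumptions that mirror the idea that choices
# rest on second-order perceptions.

def choose_action(view_of_them: str, believed_view_of_us: str) -> str:
    """Cooperate only if the other group looks cooperative AND is believed to see us that way."""
    if view_of_them == "cooperative" and believed_view_of_us == "cooperative":
        return "cooperate"
    return "conflict"

# Baseline: both first- and second-order perceptions are cooperative.
print(choose_action("cooperative", "cooperative"))  # -> cooperate

# Reflexive-control-style manipulation: the underlying group structure is untouched,
# but an outside actor convinces the target that the other side views it as hostile.
print(choose_action("cooperative", "hostile"))      # -> conflict
```

In this toy version, the attacker never changes what the two groups actually are to one another; it only changes the second argument, which is enough to change the decision.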

We have identified at least two key features of reflexive control theory that seem applicable to recent information efforts targeting U.S. elections.

One Feature of Reflexive Control Theory Is to Alter People's Perceptions

First, the model accepts as a given that relationships between individuals are defined by conflict or collaboration. The interpersonal influences between groups of people are inputs. Information efforts are trying to shift people's perceptions, not alter this fundamental group structure. For example, reflexive control does not try to convince people in political party A that they are in conflict with those from political party B. It assumes this conflict already exists. Instead, these information efforts try to amplify the degree to which people from party A believe that those from party B view them as adversaries.

Another Feature Is to Elicit Reactions, Not Necessarily to Change Behavior

The second relevant feature of reflexive control theory is the goal of eliciting reactions from targets. The theory views the world as a dichotomy between conflict and collaboration. Successful information efforts sow deep divisions between groups of people and generate a perception of "us" versus "them" that, in turn, elicits strong reactions in people. The ultimate goal is to reduce the probability that groups of people find common ground on issues of public concern. After all, it's difficult to find commonality on such topics as property taxes or farm subsidies when opponents see each other as un-American or racist and each believes these views are deep-seated within the other.

This binary framework also suggests a direct way to operationalize reflexive control. Manipulating someone's perception of others is most feasible, and most effective, if that perception can be collapsed into a one-dimensional caricature. If someone is "one of them" and if "they" are a monolithic group easily summarized by one characteristic, then perceptions are more easily manipulated and reinforced. Thus, we hypothesize that reflexive control in practice is a priming technique that encourages people to self-identify with a particular group and to simplify the characterization of that group as homogenous and ultimately in conflict with another group.

If someone is "one of them" and if "they" are a monolithic group easily summarized by one characteristic, then perceptions are more easily manipulated and reinforced.

Share on Twitter

Thus, foreign adversaries might try to operationalize reflexive control by targeting those who are likely to have the strongest reactions to group-based differences. The sources of these differences are broad: They might focus on race, class, gender, sexual orientation, political affiliation, or geography (urban versus rural dwellers). Foreign information efforts might target these differences by activating the salient group identity within them and framing this identity as being in conflict with people in other groups. By diminishing consensus, adversaries can potentially cause political paralysis.

These tactics and goals are reflected in the observations of researchers who have tracked Russian information efforts over the past several years. The extensive use of memes focusing on particular groups and repeatedly characterizing them in particular ways works to create a simple but consistent framing of that group. These framings tend to have high emotional resonance and often involve passive statements about identity (as opposed to calls for action). They prime the viewer to think about "us" versus "them" in ways that assume the viewer already knows who "we" and "they" are.[7] The uses of cross-platform communication and consistent cross-platform branding give the impression that these characterizations are more universal than they actually are.[8]

Russia is not the only foreign actor that conducts information efforts (see Appendix B for brief descriptions of efforts by China, Iran, and Venezuela), but Russia appears to have been an active presence in the United States in recent years (United States v. Internet Research Agency LLC, 2018; Select Committee on Intelligence of the United States Senate, 2019, undated). Furthermore, we believe that Russia is an equal-opportunity exploiter of social cleavages, as evidenced by reporting that online Russian trolls appeared to have targeted both white supremacists and civil rights activists (Derysh, 2020; Glaser, 2018; Lockhart, 2018; Ward, 2018). In the next section, we discuss some of the ways that Russia and its proxies appear to conduct these efforts within the United States.


Russia's Aim Is to Preserve the Regime by Weakening the West

Russia has long used various forms of active measures—a specific term that falls within the broader category of information efforts—against the United States, with a focus on U.S. domestic issues. Active measures (in Russian, aktivinyye meropriatia or aktivka) are covert and overt information efforts organized by a government to affect a target's domestic and foreign politics and are used as an instrument of power (Godson and Shultz, 1985). These measures have helped shape the course of international and domestic events in Russia's favor and helped subvert actions and trends that contradict the government's intentions.


Although many countries have sought to use active measures, the Soviet Union and then Russia institutionalized them over many decades and advanced them into a comprehensive foreign policy tool, particularly against the United States (Allen and Moore, 2018). This tool is used to undermine democratic governance processes in the United States and its allies, with the overarching aim of weakening the United States and advancing Russia as a global power. Doing so would support Russia's view of itself as the promoter of a world order and set of values that are an alternative to those represented by the United States, its allies, and the liberal rules-based order they uphold (Government of Russia, 2016; Radin and Reach, 2017; Stent, 2019).

This section is divided into two parts. First, we discuss the objectives of Russian information efforts in the post–Cold War era. Second, we review some of the ways that Russia has applied these efforts against the United States. The research for this section draws on open-source publications, such as official documents, research reports and analysis, and commentary and case study databases by Western and Russian authors.

Russian Information Efforts Reflect Four Objectives

In general, a key strategic aim of Russia's information efforts is to ensure the survival of its regime and diminish the global dominance of the United States and its allies. Russia seeks to make Western-style democracy less attractive by damaging the global status of the United States, destabilizing it internally, and weakening its unity with Western allies (Taylor, 2019). Russia pursues these aims through asymmetric and often long-term methods: sowing discord, undermining democratic forms of governance, and seeking to compromise the core values of the United States (Jenkins, 2019). Moreover, examples of active measures directed at Europe suggest that the aims of information efforts can be "opaque, as they are designed to produce second- and third-order effects" (Pomerantsev, 2015b). In practice, this plays out as the target continually guessing the real purpose of the information efforts directed against it.[9]

Open-source reports crystallize Russia's four main objectives for its active measures in the United States:[10]

  1. Polarize and disrupt societal cohesion by exacerbating important and divisive issues, such as race, social class, and gender.
  2. Undermine public confidence in democratic institutions and processes.
  3. Spread confusion, generate exhaustion, and create apathy.
  4. Gain strategic influence over U.S. political decisionmaking and public opinion.

Polarize and Disrupt Societal Cohesion by Exacerbating Divisive Issues

Russia seeks to manipulate important and divisive social and political issues as a way to alter public opinion, create and exacerbate fissures, and disrupt political processes. According to Russian author Alexander Doronin, subversion by inflating conflict situations, specifically ones that encourage feelings of grievance and dissatisfaction toward those who are perceived to be more privileged, might lead to the creation of a "fifth column"—a group that is sympathetic to or works in favor of the adversary (Doronin, 2010).

Open-source reports indicate that Russian-backed attempts to sow discord in U.S. society have largely focused on exploiting racial and immigration issues and that they both make use of existing extremist movements across the U.S. political ideological spectrum and create new such movements. Russian-backed efforts have even promoted secessionist initiatives (e.g., CalExit and Texit) (Yates, 2017). Russia and its proxies generate extreme anger and suspicion around political issues when trying to co-opt some politically right-leaning groups; when targeting some in the African American community, they have tried to build on issues of societal alienation and police brutality (Lockhart, 2018).

Undermine Public Confidence in Democratic Institutions and Processes

The Russian government seeks to undermine the U.S. democratic system and institutions by disrupting public trust and manipulating the public's lack of familiarity with the institutions and processes (Spaulding, Nair, and Nelson, 2019; Thomas, 2015).[11] It tries to achieve this by exploiting the vulnerabilities of a democratic governance system, especially populist agendas (Aceves, 2019; Conley et al., 2019; Endicott, 2017). Studies on Russian influence in Europe have found that unstable and weak democratic institutions and political and economic volatility make a country more vulnerable to Russian influence, manipulation, and even state capture (Conley et al., 2019). Disinformation has become a key part of undermining trust between society and government institutions in democratic countries. Some Western authors draw a link between the spread of disinformation and declining citizen confidence, which further undermines trust in official information sources and pushes people toward so-called alternative news sources (Bennett and Livingston, 2018).

Spread Confusion, Generate Exhaustion, and Create Apathy

Spreading confusion and obfuscation, specifically through disruption or denigration of truthful reporting, is one part of Russia's contemporary propaganda model (Paul and Matthews, 2016). Over time, it appears that Russia has become more confident in using falsehoods to make people confused, paranoid, and passive; to reduce their ability to understand what is true; and to create a perception that "nothing is ever knowable" (Endicott, 2017; Paul and Matthews, 2016; Soufan Center, 2018). Disseminating intentionally false news stories online and in traditional media can give repeated falsehoods legitimacy and has the potential to create an environment of confusion and disorientation (Bennett and Livingston, 2018). Confusion is directly relevant to the ability to make decisions. One Russian author writes that decisionmaking is greatly affected by the level of confidence in signs and their meanings—i.e., confidence about reality, meaning, and facts (Karjukin and Chausov, 2017). Likewise, Western marketing literature describes "consumer confusion" arising from information overload and ambiguous or misleading information, which leaves consumers less likely to make rational choices, less decisive, more anxious, less trusting, and less able to understand what they encounter (Walsh and Mitchell, 2010). This is in line with RAND's recent Truth Decay research about the growing conflict between fact and opinion, the increasing volume of opinion over fact, and the public's diminishing trust in factual information (Kavanagh and Rich, 2018).

Gain Strategic Influence over U.S. Political Decisionmaking and Public Opinion

Russia seeks to gain strategic influence in the U.S. decisionmaking process by developing access to policymaking circles and influential groups and increasing control over U.S. public opinion. These targets are assets that might knowingly or unknowingly help Russia gain long-term political influence in return for relatively low-profile investments by Russia today. For example, Russian agent Maria Butina was sentenced to 18 months in prison in 2019 and then deported after the U.S. Department of Justice accused her of "infiltrating organizations having influence in U.S. politics, for the purpose of advancing the interests of the Russian Federation" (U.S. Department of Justice, 2018; also see Lamond, 2018). Prior to that, Russia attempted to use information efforts to curb fracking in the United States. The decline in oil prices in 2015 had a severe impact on Russia's economy, which is highly dependent on oil and natural gas. One report concluded that Russia exploited genuine concerns about the environmental impact of fracking in its covert efforts to curb the practice, which produces a cheaper source of energy that threatens Russia's exports to Europe.[12] As a result, the Department of Justice charged three Russian agents with spying on U.S. "efforts to develop alternative energy resources." Russia's information campaign included releasing a documentary about illnesses allegedly triggered by fracking in the United States, using manipulated testimonies from trusted sources (Rogan, 2015). Russia has also sought to support and establish relations with social movements in the United States and Europe that tend to operate outside the traditional political party structure, such as secessionist movements in California and Texas.[13]

Technology Puts New Twists on Cold War Techniques

Although the techniques that were tried and tested during the Cold War have not been forgotten, new technologies and increased cultural and economic links between Russia and the United States offer a wider array of means and methods by which to manipulate information. Interactions today have been augmented with internet-based media, social networking sites, trolls, and bots. Technological development has opened new means of information manipulation and facilitates larger, more scalable, and more effective operations at minimal expense. The role of the internet has been recognized at the highest levels in Russia. In 2014, Russian President Vladimir Putin noted that "the rapid progress of electronic media has made news reporting enormously important and turned it into a formidable weapon that enables public opinion manipulations" (Pomerantsev, 2015a). One Russian author further elaborates that even small amounts of information distributed during "crisis situations" might lead to serious results (Doronin, 2010).

Russian active measures still employ more-traditional methods, which today are facilitated by informal communication networks built on economic and cultural cooperation, expatriate communities, and proxy individuals or groups, as well as by other conventional means. This seemingly reflects the views of Russian military thinker Colonel S. Leonenko, who pointed out that information technology poses a challenge to reflexive control because computers lack "the intuitive reasoning of a human being" but also offers new methods of influence—e.g., anonymity, whitewashing of false information, and wide dissemination (Thomas, 2004). Different techniques are not isolated from each other, and campaigns often move from the virtual and media environment into the real world by rallying protests online, hiring individuals to organize rallies, or hiring individuals to participate in rallies and protests.

The techniques used might vary because they are tailored to the issue, environment, target, and intended result (Doronin, 2010). Russia uses a wide variety of methods that are tailored to situations, and it continually seeks opportunities to achieve the aims and objectives we have described. Figure 2 displays several widely used techniques identified in open-source reports.

Figure 2. Overview of Select Russian Active Measures

  • Tailored Disinformation: Pit different groups against each other by identifying content and topics to which each targeted group might be most susceptible.
  • Amplified Conspiratorial Narratives: Promote or denigrate an issue, sow distrust, and spread confusion by disseminating conspiratorial narratives, rumors, and leaks.
  • Paid Advertising: Push people to like pages, follow accounts, join events, and visit websites.
  • American Asset Development: Reduce the likelihood of detection by recruiting Americans to perform tasks for handlers.
  • Narrative Laundering: Move a narrative from its state-run origins to the wider media ecosystem through witting or unwitting participants.
  • Hack-and-Leak Operations: Illegally procure information and share it via platforms such as WikiLeaks.
  • False Online Personas: Create false personas, sometimes with information that belongs to real people, to hide real identities.
  • Social Media Groups: Exacerbate existing issues, gather information, and recruit for events by creating social media groups dedicated to divisive issues.
  • Memes and Symbols: Use memes to create easy-to-share snippets of information that can emotionally resonate with people.
  • Secessionist Support: Undermine the United States by establishing links with and supporting secessionist ideas and movements.
  • Fringe Movement Support: Build support for Russia's values and society by establishing links to extremist groups.

When using these techniques, Russia contacts a variety of individuals and groups from across the political and social spectrum, often reaching for the fringe elements on both the political right and left, even if Russia is not always ideologically aligned with such groups. Its information operation narratives are tailored to different target audiences.

In Europe, for example, Russia has seduced different political groups using anti-EU messages, anti-U.S. hegemony narratives, and appeals to those interested in preventing fracking (which appears to benefit Russia's interest in maintaining European dependence on Russian gas). In the United States, Russia has seduced some religious conservatives with anti-LGBT+ stances (Pomerantsev, 2014).

Several U.S. authors have tackled the relationship between Russia and fringe groups, politically extreme right- and left-leaning groups, the so-called angry young men groups, motorcycle gangs, and fight clubs. They note that these groups might become—sometimes unwittingly—Russia's agents of influence. Michael, 2019, writes that Russia's cultural conservatism, nationalist government, and large white population allow it to be perceived as a potential ally to the political far right in the United States, especially on issues for which some on the political far right feel they are in the minority in a multicultural and liberal environment. According to Carpenter, 2018, manipulating extreme political groups is part of Russia's strategy to undermine Western democratic institutions: Russia seeks out marginal groups and social outcasts who can be motivated to fight the institutions of their own country and amplify divisive narratives, thus providing Russia with a "shield of deniability" that it can use to suggest that any links between Russia and such groups arose spontaneously. It is not clear in the literature whether such "partnerships are always marriages of convenience or are genuine partnerships based on shared values" (Carpenter, 2018).

In the next section, we review a sample of recent research to better understand what types of practices might help respond to these information efforts.

Research Focused on Responding to Foreign Election Interference

In this section, we present results from a systematic review of research related to information efforts and foreign election interference. We reviewed a total of 142 documents and focused on 78 that featured data. (For the methodological details of this effort, see Appendix C.) We begin by describing a framework for organizing this literature, followed by a review of some general trends.

Three Features Describe How Russia's 'Firehose of Falsehoods' Works

Russia's information efforts have been described as a "firehose of falsehood" because they produce large volumes of partial truths and objective falsehoods, continually and repetitively, via multiple channels—e.g., text, video, audio, imagery (Paul and Matthews, 2016).[14] Here, we describe the pathology of this firehose. Figure 3 describes three key features of this pathology: production, distribution, and consumption of content (Matthews et al., forthcoming).

Figure 3. Framework for Understanding the Pathology of Falsehoods

  • Production: Different kinds of falsehoods (malinformation, misinformation, and disinformation) are produced.
  • Distribution: Falsehoods are distributed via social networks, where they can reach people who were not directly targeted.
  • Consumption: Falsehoods are consumed by members of the general public.

We found that the unit of analysis—defined as the subject (i.e., who or what) of a study—typically differs for each of the three features in Figure 3.[15] For production, the unit of analysis tends to be the content itself, including malinformation (e.g., leaked documents), misinformation (e.g., misleading content that features half-truths), and disinformation (e.g., complete falsehoods, such as forgeries or fictitious-declarative statements). For distribution, the unit of analysis is the social network on which users share original content or create derivative content to share with others.[16] For consumption, the unit of analysis is the individual user who views and shares this content—typically, these are members of the general public who express support for content, but they can also be inauthentic users, such as bots.
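
As a rough illustration of how this framework can organize the literature, the short Python sketch below tags studies by unit of analysis and by the kind of falsehood they examine. The class names, enum values, and example tags are our own labels for exposition, not a schema taken from the reviewed studies.

```python
# Illustrative only: the enums and example tags below are our own organizing labels,
# not a published schema from the reviewed literature.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class UnitOfAnalysis(Enum):
    CONTENT = "production"       # the falsehood itself
    NETWORK = "distribution"     # the platform or network spreading it
    CONSUMER = "consumption"     # the individual viewing or sharing it

class FalsehoodKind(Enum):
    MALINFORMATION = "genuine material released to cause harm (e.g., leaks)"
    MISINFORMATION = "misleading content featuring half-truths"
    DISINFORMATION = "complete falsehoods, such as forgeries"

@dataclass
class Study:
    citation: str
    unit: UnitOfAnalysis
    falsehood: Optional[FalsehoodKind] = None  # not every study isolates one type

# Hypothetical tags for two studies discussed later in this report.
sample = [
    Study("Heredia, Prusa, and Khoshgoftaar, 2018", UnitOfAnalysis.CONTENT),
    Study("Jang et al., 2019", UnitOfAnalysis.NETWORK, FalsehoodKind.MISINFORMATION),
]
for s in sample:
    print(f"{s.citation} -> {s.unit.value}")
```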

The research on foreign information efforts does not always reflect the neatly patterned framework described in Figure 3, and there is no shortage of alternative frameworks and models that describe various features of information efforts. For example, the firehose of falsehood framework characterizes Russian propaganda efforts as producing large volumes of rapid, continuous, and repetitive content through multiple channels (Paul and Matthews, 2016). Other research has identified a "Russian disinformation chain" that describes how Russian leaders use media proxies (e.g., Russia Today or Sputnik) and amplification channels (e.g., social media platforms or U.S. news media) to reach consumers (Bodine-Baron et al., 2018). Studies have revisited the terms and definitions surrounding these information efforts. For example, some have employed the broad term "hostile social manipulation," which encompasses any purposeful, systematic generation and dissemination of harmful information (Mazarr et al., 2019, p. 15). Recently, scholars have called for a common lexicon for describing the characteristics of information efforts (Paul and Matthews, 2018). Given these debates, we use the framework in Figure 3 as a starting point to help organize some relevant lines of research.

Production Focuses on New Content Containing Falsehoods

The unit of analysis for research related to production is the content itself. In general, the studies in our sample focused on two topical areas. The first was the targeting and specific features of content. For example, one study examined a data set of 705,381 unique accounts during the 2016 presidential election to show how bots skew perceptions of candidates on social media (Heredia, Prusa, and Khoshgoftaar, 2018).[17] In another study using Twitter data, researchers found evidence that Russian information efforts targeted both conservative and liberal conversations online by impersonating Americans (Starbird, Arif, and Wilson, 2018). Although many of these studies focused on U.S. online communities, we did identify relevant studies from other countries. In one study from China, for example, researchers analyzed microblog posts from the Sina Weibo and Tencent Weibo platforms related to avian influenza A (H7N9) (Chen et al., 2018). Of the 1,680 microblog posts, researchers classified about 20 percent (n = 341) as misleading. The authors report that users who posted misleading messages had the highest average rank of reposts compared with other types of messages, but they ranked lowest in number of followers and existing posts.

The second topical area focuses on automated methods for classifying content that contains falsehoods within information environments. For example, one proposed framework for detecting falsehoods analyzes the text (e.g., title length, percentage of proper nouns) and verifies the accuracy of information compared with a corpus of trusted sources (Ibrishimova and Li, 2019). In another study, the authors describe a classifier that estimates the probability that a news story is false using such features as the headline (e.g., whether a news title had capital letters), authorship characteristics, sourcing, origin or publisher, and the content's political perspective (Snell et al., 2019). Other research has applied topic modeling to examine differences in Russian content versus English content (Chew and Turnley, 2017).
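
To show what this kind of feature-based classification looks like in practice, here is a minimal Python sketch using scikit-learn. The features loosely follow those named above (title length, capitalization, a crude proper-noun proxy, sensational punctuation), but the training headlines, labels, and model choice are hypothetical assumptions of ours; this is not a reimplementation of the cited studies' pipelines.

```python
# Minimal sketch with made-up headlines and labels; not the classifiers of
# Ibrishimova and Li (2019) or Snell et al. (2019).
import numpy as np
from sklearn.linear_model import LogisticRegression

def headline_features(title: str) -> list:
    words = title.split()
    n_allcaps = sum(1 for w in words if w.isupper() and len(w) > 1)   # shouting words
    n_capitalized = sum(1 for w in words[1:] if w[:1].isupper())      # crude proper-noun proxy
    return [
        len(words),                              # title length
        n_allcaps / max(len(words), 1),          # share of all-caps words
        n_capitalized / max(len(words), 1),      # share of capitalized words
        title.count("!") + title.count("?"),     # sensational punctuation
    ]

# Hypothetical training data: 1 = false story, 0 = true story.
titles = [
    "SHOCKING proof the vote was RIGGED!!",
    "Officials certify county vote totals after routine audit",
    "You WON'T BELIEVE what this candidate said!",
    "State releases turnout figures for the primary election",
]
labels = [1, 0, 1, 0]

X = np.array([headline_features(t) for t in titles])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([headline_features("UNBELIEVABLE fraud uncovered in ballot count!!")]))
```

Real systems in this literature combine many more signals (authorship, sourcing, comparison against trusted corpora); the sketch only illustrates the surface-feature step.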

Distribution Research Explains How Falsehoods Spread

The unit of analysis for research related to distribution is the social network. We highlight two types of studies: those focusing on the role of social media platforms in preventing the spread of online falsehoods and those focusing on the role of machine-learning approaches to accomplish the same task.

First, we found several studies that focused on the role of social media platforms in this area. Research on how the conspiracy theory linking vaccines with autism spread via Twitter and Reddit showed that each platform served different functions in the dissemination of this disinformation (Jang et al., 2019). Twitter was found to drive news agendas, while news content drove Reddit discussions. Understanding how platforms contribute to different aspects of the diffusion chain would help in detecting the flow of disinformation articles and targeting appropriate interventions on these platforms. Likewise, another study investigated the impact of WhatsApp's policy that limited the number of times a user could forward a message to just five. On public WhatsApp groups in Brazil, Indonesia, and India, this policy helped slow the spread of disinformation but did not block it entirely (de Freitas Melo et al., 2019). The openness and control of a platform also affect the flow of disinformation: A study comparing Twitter's open platform with Sina Weibo's more-closed, government-controlled platform showed that sharing misinformation about Ebola outbreaks was less prevalent on Chinese microblogs (Fung et al., 2016). Another study on the spread of the chemtrails conspiracy on several online platforms suggests that anonymity online appears to help spread conspiracies (Tingley and Wagner, 2017).
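
As a rough illustration of why a forwarding cap can slow but not stop diffusion, the toy Python simulation below compares uncapped and capped forwarding in a randomly mixed population. All parameters (population size, contacts per user, forwarding probability, number of steps) are arbitrary assumptions of ours, not values from the WhatsApp study.

```python
# Toy model with arbitrary parameters; not the de Freitas Melo et al. (2019) methodology.
import random

def simulate_spread(n_users=10_000, contacts_per_user=20, p_forward=0.4,
                    forward_cap=None, steps=5, seed=7):
    random.seed(seed)
    exposed = {0}          # user 0 originates the message
    frontier = [0]
    for _ in range(steps):
        next_frontier = []
        for _spreader in frontier:
            contacts = random.sample(range(n_users), contacts_per_user)
            shares = [c for c in contacts if random.random() < p_forward]
            if forward_cap is not None:
                shares = shares[:forward_cap]      # platform limits forwards per user
            for c in shares:
                if c not in exposed:
                    exposed.add(c)
                    next_frontier.append(c)
        frontier = next_frontier
    return len(exposed)

print("no cap:  ", simulate_spread(forward_cap=None))   # spreads faster
print("cap of 5:", simulate_spread(forward_cap=5))      # slower, but the message still spreads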

Second, there is a growing body of research on using machine-learning models to track, understand, and mitigate the spread of existing falsehoods on social media. One study proposed using these models for situational awareness in understanding a multipolar political landscape prior to an election. In a study of 60 million exchanges among 2.4 million Twitter users around the 2017 French election, researchers qualified and quantified various characteristics of online political communities, tracking their temporal evolution, structures, alliances, and semantic features during the campaign (Gaumont, Panahi, and Chavalarias, 2018). This situational awareness might provide a foundation for understanding how falsehoods spread across a country's electorate and what communities are most affected. Other studies looked at how to detect disinformation accurately in these conversations. A graph-based machine-learning model tracked the stance of 72 rumors in more than 100,000 tweets using a semisupervised approach (Giasemidis et al., 2020). This model measured whether the tweeter felt positive, neutral, or negative about the rumor and was used to help predict the accuracy of information. Results showed the algorithm was fast and accurate, exceeding 80 percent average accuracy. Using semisupervised machine-learning algorithms to classify the stance of rumor tweets might lead to fast, scalable, and accurate differentiation of information. Advances in machine learning might help researchers accurately classify false information, understand its spread, and gain insight into how users receive it.
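
As a simple illustration of the semisupervised idea (not the graph model used by Giasemidis et al., 2020), the Python sketch below hand-labels the stance of two tweets and lets scikit-learn's LabelSpreading propagate labels to the unlabeled ones through feature-space similarity. The example tweets, features, and parameters are our own assumptions.

```python
# Illustrative only: toy tweets, TF-IDF features, and scikit-learn's LabelSpreading
# stand in for the graph-based, semisupervised stance classification described above.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

tweets = [
    "this rumor is true, share it everywhere",       # hand-labeled: supports rumor (1)
    "this rumor is fake and already debunked",       # hand-labeled: denies rumor (0)
    "so true, share it right now",                   # unlabeled
    "completely fake, it was debunked yesterday",    # unlabeled
]
labels = np.array([1, 0, -1, -1])   # -1 marks unlabeled tweets

X = TfidfVectorizer().fit_transform(tweets).toarray()
model = LabelSpreading(kernel="knn", n_neighbors=2).fit(X, labels)
print(model.transduction_)          # stance labels propagated to all four tweets
```

The appeal of this family of methods is that only a small hand-labeled seed set is needed; labels for the bulk of the tweets are inferred from their similarity to labeled neighbors.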

Consumption Research Focuses on the Role of Consumers

The unit of analysis for research related to consumption is the individual consumer. Much of the research focused on consumer views of content and the impacts of fact-checking on these views.


The first theme focused on consumers' features and views surrounding exposure to falsehoods, which could lead to more consumers taking action against false information. Pew surveyed representative samples of adults in 11 emerging economies and found that respondents felt they regularly encountered false information on social media (Smith et al., 2019). Furthermore, an analysis of a three-country survey comparing voters in the United States, the United Kingdom, and France found that conservative voters in the latter two countries were no more likely than nonconservative voters to report exposure to falsehoods. This was not true in the United States: Those on the political right reported feeling much more exposed to false news, and reported trusting the news much less, than did nonconservative voters (Koc-Michalska et al., 2020). This deterioration of trust in news media among U.S. conservatives implies that news organizations and government might need to communicate differently with these consumers.

Additionally, Budak, 2019, researched how the 2016 presidential election correlated with the prevalence of false news online. Budak randomly sampled 5,000 tweets each from tweets mentioning President Donald J. Trump or former Secretary of State Hillary Rodham Clinton from May 2014 to January 2017. This study showed that the prevalence of falsehoods in news increased over the course of the campaign and that voter perceptions of former Secretary of State Clinton were more in line with false news accounts. Finally, in a study of more than 5 million tweets, researchers applied machine-learning algorithms embedded in network models to find user attributes that accurately predict how users will react to online content (Gallo et al., 2020). Among four factors (the Big Five personality traits, time interval, predominant sentiment, and sentiment distribution), they found that the Big Five personality traits (i.e., openness to experience, conscientiousness, extroversion, agreeableness, neuroticism)[18] had the most impact on user reactions to false information.

A second theme focused on the effects of fact-checking of content on consumers, which is a popular approach to countering false information. One study looked at Facebook's "rated false" and "disputed" tags on different headline types (false pro-President Trump, false anti-President Trump, and true headlines) using respondents from Amazon's Mechanical Turk crowdsourcing platform (Clayton et al., 2019). Results showed that both tags reduce beliefs in false news, but the rated false tag was more effective than tagging headlines as disputed.

Because not all news stories can be checked and tagged, providing general warnings might help alert users to false news. A related study on StopFake—a Ukrainian fact-checking organization founded in response to Russian information efforts in 2014—showed that fact-checking organizations have different focuses. StopFake focused on finding falsified evidence, such as manipulated or misrepresented images and quotes. However, most U.S. fact-checking organizations assume quotes and images are legitimate and focus on evaluating nuanced political claims. Some researchers claim most U.S. fact-checking groups are ill-equipped to detect disinformation that is wholly falsified (Haigh, Haigh, and Kozak, 2018).


Conclusions and Recommendations

This report reviews some of the research that is relevant to foreign information efforts targeting U.S. elections. It provides a general framework for understanding these efforts and will inform our analysis in future volumes of this series. We focused on efforts by Russia and its proxies because these actors appear to have been the most active in recent years, but we note that other state and nonstate actors also might target the United States. As a result of this work, we reached four general conclusions.

Conclusions

Foreign Interference in U.S. Politics Is Not a New Phenomenon

Foreign influence in U.S. domestic affairs dates back to the founding of this country, and there are several examples in our 244 years of existence.

How the Russians Have Tried to Interfere in Recent U.S. Elections Follows Some Logic

We hypothesize that reflexive control theory—a theoretical research program first developed in the 1960s and used by the Soviet military—is part of the intellectual basis for current Russian efforts. At its core, reflexive control theory assumes that people live in a polarized world defined by either cooperation or conflict and that people make decisions based on these views. We believe that Russia is trying to generate, spread, and amplify falsehoods that distort views of "us" versus "them," with the desired outcomes of (1) driving people to view each other as either friends or adversaries, or (2) exhausting people to the point that they disengage from civic affairs altogether, with the result of political paralysis.

Russia's Tactics Aim to Polarize Americans and Paralyze the U.S. Political Process

These tactics consist of attempts at polarizing and disrupting social cohesion. Some tactics aim to exacerbate divisive issues, such as racial inequities or immigration. Others target public confidence in democratic institutions and processes as a way to undermine social trust. Underlying these efforts is a broader tactic of using falsehoods to spread confusion, drive groups of people to extreme positions, and generate collective exhaustion within U.S. society. Finally, there is evidence that Russia has tried—and continues to try—to gain direct influence over the U.S. political decisionmaking process, although we do not know how effective these efforts have been.

Our Sample of Relevant Research Revealed Some Trends for Responding to Falsehoods

Although our sample of studies is not representative of all research on this topic, it does provide some ideas for emerging practices in responding to foreign information efforts. Much of this research is fragmented and cuts across multiple disciplines, so we organized it by primary unit of analysis: the production of new falsehoods, the distribution of existing falsehoods, or the consumers of this content.

Research on production largely focused on targeting of falsehoods and the features of this content. For studies on the distribution of existing falsehoods, research focused on the role of social media platforms in preventing the spread of online falsehoods and the role of machine-learning models to mitigate this spread. Finally, research on consumption largely focused on consumer views of content and the impacts of fact-checking.

Recommendations for Responding to Foreign Information Efforts


Foreign interference has occurred throughout U.S. history and likely will continue in the future. Russia seems to have advanced its information efforts in recent years, and we suspect other countries will try to emulate these practices. We offer three recommendations for how to start designing responses to these existing and emerging threats that target U.S. democracy. In future volumes of this series, we will present results with more-specific recommendations for responding to these foreign information efforts.

A Holistic Strategy Is the Optimal Response to Information Efforts by Foreign Countries

During the Cold War, then-Under Secretary of State Lawrence Eagleburger recommended a "balanced approach" to Soviet information efforts that neither ignores the threat nor becomes obsessed with it (Eagleburger, 1983). Our assumption is that reflexive control theory is part of the intellectual basis for Russian efforts targeting U.S. elections. The unit of analysis for this theory is broad, spanning the entirety of U.S. society rather than any particular piece of online content, social media platform, or individual consumer. We recommend that any defensive strategy account for the complex relationships among the production of falsehoods, how others distribute content (particularly online), and the impacts of this content on consumers.

Any Defense Should Anticipate Those Who Are Likely to Become Targets of These Efforts

We believe that a key goal for information efforts is to alter people's perceptions to amplify a view of "us versus them," with political paralysis as the ultimate goal. Social or political issues tied to identities (such as race, gender, social class, or political affiliation) that hold meaning for people are useful starting points because false content tied to these characteristics might elicit strong reactions (Marwick, 2018). We suspect that foreign efforts will likely produce content that plays on these identities in an effort to amplify differences and deepen preexisting fault lines in U.S. society. Thus, we recommend developing strategies that anticipate which subgroups are most vulnerable to such efforts without publicly shaming these groups or targeting specific individuals.

Any Response Should Attempt to Protect Potential Targets Against Foreign Information Efforts

The antidote to manufactured intergroup conflict is convincing people that they have more in common with those who are different from them than they might believe at first glance. We recommend collecting, analyzing, and evaluating preventive interventions to protect people from reacting to falsehoods meant to divide the country (e.g., public campaigns that emphasize shared interests of Californians, public warnings about broader information efforts by foreign adversaries, or media literacy programs for subgroups that are potential targets).

In conclusion, democracy depends on citizens finding consensus with people whom they might view as different from them. Foreign adversaries have made attempts at undermining the formation of this consensus and will continue to do so. There is a logic to these attempts. The best defense is a holistic approach that accounts for the preexisting fault lines that exist within U.S. society.


Notes

  • [1] We note that the lines between foreign and domestic information efforts are often blurred.
  • [2] This review provides a framework for interpreting the results in subsequent parts of this series. We draw from the Chairman of the Joint Chiefs of Staff in defining what information efforts are and where they take place, and define information efforts as activities that "influence, disrupt, corrupt, or usurp the decision making of targets while protecting one's own" (Chairman of the Joint Chiefs of Staff, 2014, p. A-1). These efforts might include authentic content (such as that from human trolls) and inauthentic content (such as that from bots) that is created and disseminated by state-sponsored actors or their proxies (who could be willing or unwilling participants). Such efforts exist in an information environment, broadly defined as "the aggregate of individuals, organizations, and systems" (Chairman of the Joint Chiefs of Staff, 2014, p. GL-6). These environments involve any collection of people who interact with each other online or in person. (For more details, see Chairman of the Joint Chiefs of Staff, 2014.)
  • [3] Vladimir Lefebvre personal e-mail with the lead author, March 3, 2018.
  • [4] This hypothesis is largely based on past research, but Vladimir Lefebvre confirmed that while he did not have specific information concerning Russia's use of the theory, it was his belief that the Russian Federation uses reflexive control theory in its information efforts against the United States (Lefebvre personal e-mail with the lead author, March 3, 2018).
  • [5] The mathematical framework describing reflexive control is distinct from the language of most game theory, although both deal with the strategic use of information. Reflexive control theory does not use an equilibrium concept to predict outcomes, and it does not assume agents behave rationally according to a utility function; rather, it assumes agent behavior is internally consistent with an ethical belief function (Lefebvre and Farley, 2007, p. 636).
  • [6] In general, reflexive control theory assumes that ethical frameworks differ in the United States and Soviet Union: The former assumes that bad ethical means should not be implemented for good ethical outcomes; the latter is less concerned when there is conflict between bad means and good goals. For more details, see Chotikul, 1986; Lefebvre and Farley, 2007; and Umpleby, 2016.
  • [7] See, for example, the "roster of identities" listed by DiResta et al., 2019, p. 11.
  • [8] The cross-platform tactic has been described as a "media mirage" that creates an "immersive information ecosystem" (DiResta et al., 2019, pp. 14, 42; Howard et al., 2018, p. 8).
  • [9] Pomerantsev, 2015b, provides an example: Estonia must constantly guess whether the Kremlin's threats about Russia having the capabilities to invade Estonia are an effort to show real intention, affect the morale of the Estonian population, or seek publicity from reporters elsewhere.
  • [10] Note that this selection of key objectives is based on an analysis of a selection of open-source publications and is focused on aims that are relevant for U.S. domestic politics only.
  • [11] For a more detailed discussion of Russia's approach to information warfare, see Thomas, 2004.
  • [12] Rogan, 2015, concludes that Russia used a "three-pronged strategy" and tried to (1) make covert payments to well-intentioned environmental groups in the West, often even without their knowledge, (2) gather intelligence on the U.S. energy industry, and (3) mount an information campaign against fracking, calling it a hoax.
  • [13] Tweets originating in Russia supported California and Texas secession movements (Martin and Shapiro, 2019).
  • [14] We focus on information efforts by Russia and its proxies because they appear to be some of the most organized in the world. Furthermore, much of the contemporary research on this topic is focused on Russia and its proxies. We acknowledge, however, that other countries also engage in similar efforts. For more information on those efforts, see Appendix B.
  • [15] Research on information efforts often touches on different units of analysis. Thus, many documents in our sample are classified across the pathology of falsehoods described in Figure 3. See Lewis-Beck, Bryman, and Liao, 2004.
  • [16] We broadly define users to include humans and software agents, commonly known as bots.
  • [17] This research could be categorized under consumption because it also looked at public opinion as measured by sentiment and volume baselines in the data.
  • [18] The OCEAN model (the Big Five personality traits) used in this article assigns scores to people's character along five dimensions: openness, conscientiousness, extraversion, agreeableness, and neuroticism. For more details, see Gallo et al., 2020, p. 921.


This research was sponsored by the California Governor’s Office of Emergency Services (Cal OES) and conducted within the International Security and Defense Policy Center of the RAND National Security Research Division (NSRD).

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.


RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.