About Our Methodology

This online database provides information about tools currently available or in development to target online disinformation, particularly on social media.

We identified tools through web searches, articles that review tools and advances in this field, and discussions with experts (e.g., those involved in developing or funding tools). We collected the data primarily from company websites and, in some cases, from news articles, industry assessments, or research papers. For each tool, we filled in as much information as possible, but we were rarely able to complete every field.

We aimed to be as comprehensive as possible in identifying tools that fit our inclusion criteria and as accurate as possible in describing those tools. However, new tools emerge constantly, and existing tools are improved and further developed every day. To help improve our data, you can recommend a new tool for the database. Our team will review your submission and follow up as appropriate.

How We Chose the Tools in this Database

Each of the tools included in this database aims to improve the online information ecosystem in some way, such as by shaping the way information is shared or by providing information consumers with tools to better navigate the media and information sources they interact with.

We used a number of inclusion/exclusion criteria:

  1. Each entry is a tool that is either interactive or produces some product the consumer can use or apply to their own web browsing or information consumption. This means that online games, courses, image verification platforms, and disinformation trackers would be included, but websites that offer general-purpose resources would not be. We also include fact-checking tools and platforms, because these kinds of tools provide users with assessments of information quality and veracity.
  2. This database is focused on tools developed by nonprofit and civil society organizations. One goal of the database is to provide a more complete picture of the set of tools available in this space, as well as the gaps that exist. This information will be useful to the general public, as well as to researchers and philanthropic organizations looking for ways to most productively invest available resources. As a result, each entry must be produced or disseminated by a nonprofit entity. We do not include products developed by for-profit companies, even if those tools are provided to the public for free.
  3. Each entry must be explicitly focused on online disinformation. There are some tools—like many ad blockers and other privacy tools—that may serve a counter-disinformation purpose, even if this is not their primary function. However, we include only those that explicitly reference disinformation as the target of the tool.
  4. We focused on U.S.–based tools targeting the U.S. market and tools developed by groups located in the United States. Many of the tools included are applied internationally, but we did not include tools developed elsewhere. We hope to expand to include these tools in future iterations of this database.

Excluded Tool Types

There are several types of tools relevant to the challenge of online disinformation, but some we intentionally exclude from our database due to their wider remit and their less-direct ties with the disinformation issue. Below, we discuss these tools and how they relate to the disinformation challenge.

Ad blockers
Ad blockers originated as tools narrowly built to block online advertisements. They typically work through whitelists and blacklists: blacklists deny flagged sites access to an individual or network, while whitelists let through only those sites on a preset list.
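To make that mechanism concrete, here is a minimal sketch of list-based filtering; the domains and the allow_request function are hypothetical, and real ad blockers match requests against much larger, regularly updated filter lists.

    # Minimal sketch of list-based request filtering. The domains and the
    # allow_request function are hypothetical, for illustration only.
    BLACKLIST = {"ads.example.net", "tracker.example.org"}  # flagged ad/tracking hosts
    WHITELIST = {"news.example.com"}                        # preset trusted hosts

    def allow_request(host: str, use_whitelist: bool = False) -> bool:
        """Return True if a request to the given host should be let through."""
        if use_whitelist:
            # Whitelist mode: only hosts on the preset list get through.
            return host in WHITELIST
        # Blacklist mode: everything gets through except flagged hosts.
        return host not in BLACKLIST

    print(allow_request("ads.example.net"))                       # False: blocked
    print(allow_request("news.example.com", use_whitelist=True))  # True: allowed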
These types of programs can be useful tools against disinformation because of what ads may bring with them: malware, disinformation, and an opportunity for companies to collect data on individuals who may click on those ads. By limiting the access of advertisers to individuals, ad blockers cut off a valuable chain that contributes to microtargeting and facilitates the spread of disinformation.
More recently, some ad blockers have taken the controversial step of granting some companies preferred status, meaning that their ads are whitelisted and sent through. Typically, ads have to meet some set of criteria before they are granted this status, and individuals can choose to opt out of even these.
We exclude most ad blockers from the database despite their potential use against disinformation, because tackling the challenge of online disinformation is largely a secondary benefit of these tools. What's more, the size of that benefit varies widely based on the nature of the tool and how it works. We do include ad blockers that explicitly reference their disinformation application.
Privacy Tools
There are many different types of privacy tools, but they all share an interest in protecting users' personal data and online behavior, so that individuals are not tracked online by malicious actors, their data is not stolen, and they are somewhat protected from microtargeting for ads and information.
The most common privacy tools are VPNs and anonymization tools, which disguise an individual's IP address, online behavior, searches, or location, protecting them from microtargeted ads or information. Despite their potential use against disinformation, we exclude these tools because tackling online disinformation is largely a secondary benefit, and the size of that benefit varies widely based on the nature of the tool and how it works. We do, however, include privacy tools that explicitly reference their disinformation application.
Commercial media monitoring tools
These tools are used by corporations to fulfill a wide range of needs, including brand management, social media analytics, and cybersecurity. These tools do not meet the criteria for inclusion in our database, because they are for-profit and because their focus is on helping companies maintain and improve their reputation, maximize the effect of their advertising and social media campaigns, and ensure the security of their own information and systems.
We acknowledge that commercial media monitors may be useful against disinformation in a couple of ways. First, they can conduct credibility scoring of different websites to ensure companies' ads appear on verified sites. Second, commercial media monitors may identify instances of disinformation against a corporation (on social media or elsewhere), allowing for counter-messaging efforts. But commercial media monitoring may also contribute to the problem of disinformation if it allows corporations to improve their microtargeting efforts.
Free tools developed by for-profit companies
As noted above, we included only tools developed by nonprofit organizations and excluded even free tools developed by for-profit organizations. Some of these free tools have many of the same objectives as tools included here and, because they are free, may be useful to information consumers.
For example, Newsguard offers a browser extension that provides a "Nutrition Label" for a large number of online news and information sites (over 2,000, according to the organization). Each label offers an assessment of the accuracy, transparency, credibility, and quality of the site. Similarly, Trusted News, owned by Factmata, uses a browser extension to provide users with information about the credibility, bias, and quality of online news sources. Both of these resources, then, are credibility-scoring tools similar in many ways to those in our database, except for the nature of their developer. In future iterations, we hope to be able to expand this site to also capture these types of resources.
Tools developed internationally
The database focuses on tools developed within the United States and targeted at the U.S. market. This means that it currently excludes some tools developed internationally that might be useful to U.S.–based users. For example, Newswise by the Canadian organization CIVIX offers media literacy activities and videos and so resembles many of the media literacy tools captured in our database. There are many other examples of valuable international tools, particularly in the area of media literacy. We hope to expand the data presented here to include those in the future.

More About the Data We Collected

Below you'll find more details about the data we collected for each tool. Much of the information on each tool is basic and self-explanatory (e.g., name, description, intended users), but a deeper explanation may be helpful in other cases. Here are some definitions to help you understand how the different aspects of the tools were analyzed and coded.

Tool type
Different tools aim to do different things. We have identified seven types of tools, and each tool is classified into at least one (and up to two) of the categories.
Bot/spam detection
Tools intended to identify automated accounts on social media platforms (see the illustrative sketch after this list)
Codes and standards
This applies to all tools that establish new norms, principles, or best practices to govern a set of processes or to guide conduct and behavior. In the majority of the tools presented here, codes and standards aim to guard against disinformation or misinformation, to increase the quality of journalism, or to commit individuals or companies to a set of principles.
Credibility scoring
Tools that attach a rating or grade to individual sources based on their accuracy, quality, or trustworthiness
Disinformation tracking
Applies to tools that track and/or study the flow and prevalence of disinformation
Education/training
This applies to any courses, games, and activities aimed at combating disinformation by teaching individuals new skills or concepts. We include only online courses/games/activities that have an interactive component, so a traditional, classroom-based curriculum would likely not be included, but an online training would be.
Verification
This applies to fact-checking tools that aim to ascertain the accuracy of information.
Whitelisting
Tools that create trusted lists of IP addresses or websites to distinguish trusted users and sites from ones that may be fake or malicious
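The sketch below, referenced under bot/spam detection above, shows one simple way such a tool could score an account. The thresholds, weights, and field names are hypothetical and do not describe the method of any tool in the database; real detectors rely on far richer behavioral signals.

    # Illustrative rule-based bot scoring. Thresholds, weights, and field
    # names are hypothetical; actual detectors use much richer signals.
    def bot_score(account: dict) -> float:
        """Return a score from 0 to 1; higher suggests automation."""
        score = 0.0
        if account.get("posts_per_day", 0) > 100:         # implausibly high posting rate
            score += 0.4
        if account.get("account_age_days", 10_000) < 30:  # very new account
            score += 0.3
        if not account.get("has_profile_photo", True):    # default/empty profile
            score += 0.3
        return min(score, 1.0)

    suspect = {"posts_per_day": 250, "account_age_days": 10, "has_profile_photo": False}
    print(bot_score(suspect))  # 1.0 -> worth flagging for human review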
Status
Some of the tools in the database are still being developed, so we've identified where each tool is in the development lifecycle: fully operational, pilot program, prototype, initial testing, in development, alpha stage, or beta stage.
Intended users
The audience for which the tool was developed may be the general public, journalists, researchers, or teachers/students.
Cost
We include whether the tool has a cost or is free. Because we focus on tools developed by nonprofits, the majority of products are free. However, there are several cases where an organization provides a free tool, but also offers a more advanced version for a fee.
Tool focus
What we're calling content-focused tools evaluate information—the authenticity of a photo, for example—directly. Process-focused tools, on the other hand, evaluate how information is produced and disseminated.
Method or technology
We've classified each tool into one of five categories based on the method or technology it uses.
Is the tool automated?
This captures whether the tool operates on its own, through an app or other mechanism, or requires human implementation. Some tools are "mixed," meaning they combine more than one of these methods of operation.
Founding organization
The organization that initially created or spurred creation of the tool.
Founder/Primary Contact
Where possible, we identify a primary contact for the tool, often the original developer or founder.
How is this tool working to address disinformation?
This provides a brief description of how the tool fights disinformation. We focus on its objective and approach, rather than the particular method or technology it uses.
Is there a connection with tech platforms?
This describes how—if at all—a tool is connected to technology platforms such as Google, Twitter, and Facebook.
Who is funding the tool?
A list of organizations, foundations, and/or individuals that fund the tool.
Are there external evaluations?
This field explains whether or not a given tool has been formally evaluated to determine if it effectively counters disinformation as intended. Formal evaluations typically require a randomized control trial (RCT) with pre/post testing. We found in our research that the vast majority of tools aimed at online disinformation have not been evaluated in any formal sense. Some report the broad results of internal reviews and some discuss output metrics about the reach of the tool. For the purpose of this database, in addition to formal evaluatiosn using RCTs, we also consider a few other types of evaluations that use either pre/post assessment or comparisons across users and non-users, with the caveat that these types of evaluations support more-limited conclusions. We also include evaluations of the tool performance—that is, does the tool do what it says it does (e.g., detect bots, check facts)? We note in the evaluation field which type of evaluations we identified for each tool. Where we did not identify an evaluation, we noted "none found."