About the Mass Attacks Defense Toolkit

The Mass Attacks Defense Toolkit advances efforts to prevent and reduce intentional, interpersonal firearm violence and public mass attacks in the United States. The goal of this toolkit is to provide practical strategies and guidance on deterring, mitigating, and responding to mass attacks for a variety of audiences, including public safety experts, practitioners, policymakers, community groups, and the general public.

The toolkit is organized by the three phases of the Mass Attacks Defense Chain. Each phase contains findings that are relevant for both the whole-of-community perspective and individual community partners, including public safety, education, infrastructure, and government professionals. Our conclusions and recommendations draw on three sources of information: analysis of quantitative data on mass attack plots, including foiled plots, which we collected from open-source and limited-access databases; published research on mass attacks; and interviews with law enforcement agencies throughout the country.


Toolkit findings were synthesized from three primary categories of information:

  1. data on previous mass attacks or foiled mass attack plots
  2. reviews of prior scholarly articles and guidance on mass attacks
  3. responses from subject-matter expert interviews.

RAND Corporation researchers collaborated with RTI International to define, identify, and collect data on cases of mass violence or failed or foiled mass violence plots. Separately, interviewers from the Lafayette Group, Karchmer Associates, and RAND conducted a series of interviews with law enforcement, public safety, and security representatives. Analysis from each effort was used to identify and contextualize recommendations. Following the completion of an earlier version of this toolkit, an expert panel reviewed the findings and materials for operational validity and utility.

Plot Data Collection and Analysis

We began by defining mass attacks and mass attack plots and identifying key details for plots and incidents, then compiling cases into a database for analysis. Of interest were mass attacks, mass attack plots, and mass attack no-incidents or false positives in the United States from 1995 to 2020. We define each as follows:

  • Mass attacks and mass attack plots are defined as any violent attack or plot (conspiracy) to engage in an attack in a public space (including schools and workplaces) in the United States that endangered, or was intended to endanger, the lives of four or more people. In this definition, we exclude attacks specifically related to gangs, organized crime violence, terrorism plots prior to 2002 (to avoid statistical and operational complications from including the September 11, 2001 [9/11] and Oklahoma City attacks), and domestic violence incidents in which the unaffiliated public is not deliberately targeted.
  • No-incident or false-positive cases are defined as those involving a non-preliminary investigation or arrest of an individual suspected of preparing to commit a mass attack (as defined above) in the United States, where it turned out that the individual was, in fact, not likely planning or preparing for such an attack. Although such cases might include an individual being acquitted of charges or a prosecutor dropping charges against an individual, they do not include cases in which an individual has agreed to a plea bargain.

We drew on 27 existing databases of mass attacks in the United States, along with customized Google searches, to identify all three types of cases during this period that met the definitional criteria. The case identification process consisted of three steps. First, we mined the following databases and sources for mass attacks and mass attack plots that matched the definitional criteria.

Databases and Sources

Public Mass Shootings

  • Violence Policy Center's "Concealed Carry Killers" (Violence Policy Center, 2021a)
  • Violence Policy Center's mass shootings involving Large Capacity Ammunition Magazines (Violence Policy Center, 2021b)
  • Mother Jones' "U.S. Mass Shootings, 1982–2021" (Follman, Aronsen, and Pan, 2021)
  • Advanced Law Enforcement Rapid Response Training's (ALERRT's) Active Attack Events data (ALERRT, undated)
  • Federal Bureau of Investigation's (FBI's) "Active Shooter Incidents" (Blair and Schweit, 2014; FBI, 2021) and FBI's "Active Shooter Resources" webpage (FBI, undated)
  • The Violence Project's "Mass Shooter Database," version 2 (The Violence Project, undated)
  • Everytown for Gun Safety's "Ten Years of Mass Shootings in the United States" (Everytown for Gun Safety Support Fund, 2019)
  • Grant Duwe's Mass Shooting Database (Duwe, 2020)
  • Crime Prevention Research Center's Mass Public Shootings Cases spreadsheet (Crime Prevention Research Center, undated)
  • John Lott and Carlisle Moody's Mass Public Shootings in the U.S. (Lott and Moody, 2019)
  • Lankford and Silver's Public Mass Shootings in the U.S. (Lankford and Silver, 2020)
  • Mayors Against Illegal Guns' Analysis of Recent Mass Shootings (Mayors Against Illegal Guns, 2013)
  • U.S. Secret Service's Mass Attacks in Public Spaces reports (National Threat Assessment Center, 2018; National Threat Assessment Center, 2019; National Threat Assessment Center, 2020)
  • Stanford's "Mass Shootings in America" (Stanford Geospatial Center, undated)
  • New York Police Department's active shooter report (O'Neill, Miller, and Waters, 2016)
  • Citizens Crime Commission of New York City's "Mass Shooting Incidents in America" (Citizens Crime Commission of New York City, undated)

Terror- or Hate-Related Mass Attacks

  • National Consortium for the Study of Terrorism and Responses to Terrorism's (START's) Global Terrorism Database (START, undated)
  • Institute for Homeland Security Solutions TIPS Database
  • Sweeney and Perliger's Hate Crime Incident Database (Sweeney and Perliger, 2018)
  • Germain Difo's Assessment of Foiled Plots Since 9/11 (Difo, 2010)
  • Crenshaw, Dahl, and Wilson's Unsuccessful Terrorist Attacks Against the U.S. report (Crenshaw, Dahl, and Wilson, 2017)
  • Heritage Foundation's Foiled Terror Plots Since 9/11 database (Bucci, Carafano, and Zuckerman, 2012)
  • Anti-Defamation League's Terrorist Conspiracies by Right-Wing Extremists database (Anti-Defamation League, 2015)
  • Southern Poverty Law Center's Terror from the Right database (Southern Poverty Law Center, undated)
  • Wikipedia's List of Unsuccessful Terrorist Plots (Wikipedia, 2022a)

School-Based Mass Attacks

  • National Police Foundation's Averted School Violence Database (National Police Foundation, undated)
  • Naval Postgraduate School's Center for Homeland Defense and Security's K–12 School Shooting Database (Center for Homeland Defense and Security, undated)
  • Wikipedia's List of Unsuccessful Attacks Related to Schools (Wikipedia, 2022b)

Because of an inability to assess whether cases met the inclusion criteria, we did not include data from the FBI's Uniform Crime Reporting Program, the Centers for Disease Control and Prevention's National Violent Death Reporting System, or the Gun Violence Archive. We conducted additional data set and case identification for foiled plots occurring between 2016 and 2020.

Data Processing

After extracting all unique cases that met the definitional criteria from the data sets, we created custom search strings to conduct Google searches for any mass attacks or mass attack plots that existing databases might have missed. This process was useful in identifying cases of unsuccessful mass attacks (i.e., those in which a subject did not kill or injure at least four bystanders before the attack was thwarted), as well as failed or foiled plots that were never initiated. The search strings contained the following terms: "'at random' attack," "foiled attack," "prevented mass shooting," "mass attack prevented," "bombing prevented," "bombing plot," "mass attack," and "shooting plot." For 2020 specifically, we added the following search strings: "knife attack," "car attack," and "truck attack."

Finally, to further ensure that searches captured events that occurred in 2020 (the most recent year considered), we consulted the Gun Violence Archive and FBI press releases for that year.

Because the case-sampling strategy oversamples 2016–2020 (and especially 2020) to focus on recent developments in mass attacks and defenses, this data set should not be used to assess trends in the numbers of cases per year. RAND's Gun Policy in America website addresses trends in mass shootings on its research review page Mass Shootings in the United States (Smart and Schell, 2021).

To identify cases meeting the definitions of a mass attack and of a completed attack, failed attack, or foiled plot, we conducted a brief review of each case in each of the 27 data sets and applied the inclusion criteria to filter cases into a new data set for further review and case coding. Within the first two groups of data sources—public mass shootings and terror- or hate-related mass attacks—we reached saturation well before reviewing every data set, which provided confidence in the number and representativeness of included cases. For the third group, school-based mass attacks, the large volume of threats that law enforcement deems credible (and that thus meet the project definition of a foiled plot) meant that we did not reach saturation, but we obtained a sample large enough for analysis.

To code specific details about each identified case, we first collected sets of variables coded in previous studies and data sets that address mass attacks. We then collected input from team members to identify which variables to collect data on for each mass shooting event included in the database. We originally identified 93 variables across the following four categories:

  • subject (demographics, history, prior activities, planning and preparation)—44 total variables
  • attack (weapon characteristics, site characteristics)—13 variables
  • event (action characteristics, outcome characteristics)—ten variables
  • response (law enforcement and government response, bystander response, medical and other response, investigation)—26 variables.

To narrow the list of variables within the subject category and prioritize data collection, we rated each variable on a scale of 1 to 3 on the basis of observability, actionability, and predictability. We undertook a similar exercise to identify a short list of variables within the attack, event, and response categories, rating each variable on the basis of ease of data collection and impact on lethality (again on a 1–3 scale). We selected variables that consistently scored above the mean and median rating scores in each category, as well as above the mean and median scores of all 93 original variables. We collected information on a total of 33 variables included in the short list:
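The shortlisting rule described above—keep variables scoring above both the mean and the median, within their own category and across all rated variables—can be sketched programmatically. The variable names and ratings below are hypothetical; this is an illustration of the selection logic, not the project's actual code:

```python
from statistics import mean, median

# Hypothetical 1-3 ratings, averaged across the rating criteria
# (e.g., observability, actionability, and predictability for
# subject variables).
ratings = {
    "subject": {"leakage": 2.7, "grievance": 2.3, "age": 1.2,
                "weapon_acquisition": 2.9},
    "attack": {"weapon_type": 2.5, "site_type": 1.4},
}

# Threshold over all rated variables combined.
all_scores = [score for scores in ratings.values()
              for score in scores.values()]
overall_cut = max(mean(all_scores), median(all_scores))

shortlist = {}
for category, scores in ratings.items():
    # A variable must score above the mean and median of its own
    # category as well as of all variables combined.
    category_cut = max(mean(scores.values()), median(scores.values()))
    shortlist[category] = [
        variable for variable, score in scores.items()
        if score > category_cut and score > overall_cut
    ]

print(shortlist)
```

In the study, this kind of rule was applied to the full set of 93 rated variables to yield the 33-variable short list.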

  • 17 subject variables
  • four attack variables
  • nine event variables
  • three response variables.

The project team created a data set and associated codebook (which are available upon request) that list all the aforementioned variables, definitions, variable types (e.g., categorical, integer, text), and the possible values for categorical variables. After identifying potential cases that appeared to meet the inclusion criteria, we spent an average of 20–25 minutes reviewing online news articles, reports, and data sets to collect information on each case. This step also involved further screening of cases based on the inclusion criteria. A program director with experience collecting data on mass attacks supervised four case coders and reviewed cases on a regular basis to ensure accuracy and consistency in case coding. In addition to conducting periodic case review, the program director assigned 15 test cases to each coder, and we met in a group to identify and discuss differences in coding decisions at the beginning of the project. Additionally, the coding team met regularly to discuss coding questions related to specific cases, the codebook, or inclusion criteria. In total, we coded 640 mass attack events.
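The passage above describes comparing coders' decisions on shared test cases through group discussion. One common way to quantify such inter-coder comparisons (the report does not state that this statistic was computed) is Cohen's kappa; the decisions below are hypothetical:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' categorical
    decisions on the same cases."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, based on each coder's
    # marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(coder_a) | set(coder_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical inclusion decisions by two coders on 15 test cases.
coder_a = ["include"] * 9 + ["exclude"] * 6
coder_b = ["include"] * 8 + ["exclude"] * 7
print(round(cohens_kappa(coder_a, coder_b), 2))
```

Kappa near 1 indicates that coders agree far more often than their label frequencies alone would predict; low values flag cases worth the kind of group discussion the team describes.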

After the case coding stage concluded, both RAND and RTI researchers selected a random sample of 50 coded cases and did an in-depth review of each case to identify incorrect values in each of the variables. RAND and RTI team members identified common data entry errors that were manually and programmatically corrected for the remaining cases. The initial clue and triggering clue variables for foiled plots were specifically examined to correct any coding errors.

Following this review, RAND analysts performed additional data-cleaning steps, including converting numeric variable values to plain text and updating a small number of incorrectly coded values. RAND analysts then applied a series of analytic methods, primarily in the statistical software R, to create numeric and graphical summaries of key variables. Tables, along with such graphics as bar charts, histograms, and word clouds, were generated for researchers to incorporate into literature review and interview findings. We performed basic statistical tests, including analysis of variance and chi-squared tests, to assess relationships between key variables.
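The analyses were performed in R; as an illustration of the chi-squared tests mentioned above, the following stdlib Python sketch implements Pearson's chi-squared test of independence for a 2×2 table, with hypothetical counts:

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table
    given as [[a, b], [c, d]], without continuity correction."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical cross-tabulation of foiled (vs. executed) plots
# by whether the subject leaked intent beforehand.
table = [[30, 10],   # leaked intent: 30 foiled, 10 executed
         [15, 25]]   # no leakage:    15 foiled, 25 executed
print(round(chi_squared_2x2(table), 2))
```

A statistic above 3.84 (the 5-percent critical value for one degree of freedom) indicates a statistically significant association between the two variables.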

RAND analysts also applied artificial intelligence/machine learning (AI/ML) models to the text descriptions within the case data to see whether meaningful models could link text content with an increased likelihood of casualties. To do this, we employed the AutoML system from H2O (H2O.ai, undated). This system generates training and testing data splits and searches through hundreds of algorithms, including deep learning, random forest, and linear models, to find those with the best predictive accuracy. The system also evaluates stacked ensemble combinations of the best-performing models. In this analysis, deep learning and stacked ensemble models performed best, with a simple linear model also ranking near the top. However, we were not able to generate actionable findings from the models; our best interpretation of the models' results is that higher-casualty incidents simply had more data and longer descriptions. Thus, the results shown in the toolkit reflect much simpler analyses.
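The confound noted above—that higher-casualty incidents tend to have longer case descriptions—can be checked directly with a simple correlation. The records below are hypothetical, and this sketch is only an illustration of the check, not the project's analysis:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib only."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical case records: narrative description and casualty count.
cases = [
    ("Subject arrested in parking lot before opening fire.", 0),
    ("Brief standoff; two bystanders wounded.", 2),
    ("Extended narrative describing a prolonged attack across "
     "several locations with many victims and a long police "
     "pursuit before the subject was stopped.", 14),
]
lengths = [len(description.split()) for description, _ in cases]
casualties = [count for _, count in cases]
print(round(pearson(lengths, casualties), 2))
```

A strong positive correlation between description length and casualties suggests that a text model may be learning how much was written about an incident rather than anything predictive about the incident itself.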

Literature Review

We reviewed more than 200 scholarly articles, guidance and training materials, and supporting tools related to preventing and defending against mass attacks. The literature search had the following two purposes:

  • to identify conclusions in prior literature that could directly inform findings and recommendations for this toolkit
  • to identify external resources that provide detailed, specialized information and guidance on specific steps of the Mass Attacks Defense Chain. The intent is that any individual or organization in need of more detail on a particular issue could access that information through resources linked from the toolkit. This category also included finding tools that help implement specific steps of the Mass Attacks Defense Chain (e.g., fillable forms to support threat assessment and follow-up actions).

Identifying Findings

For the first purpose (i.e., identifying conclusions in prior literature), we searched for journal articles and government and think tank reports that had findings that are directly relevant to one of the following core topics:

  • the most-relevant warning signs of a potential mass attack plot and how to assess them (e.g., findings on what should be reported to authorities)
  • the most-relevant factors that should be used in threat assessments and how to use them
  • factors associated with successful assessments and follow-up actions leading to stopped plots
  • factors associated with reduced casualties during attacks, including site security characteristics and measures, bystander actions, police response actions, medical treatment, and command and control actions.

Searching was carried out through a combination of (1) nominations by team members who follow the mass shooting and counterterrorism literature and (2) internet literature searches. We prioritized peer-reviewed journal articles that included comparisons with control groups of nonshooters and nonattackers, followed by articles that at least noted the major false-positive challenges in this field. Thus, we placed less emphasis on articles that considered only a handful of exemplar cases and/or presented findings about indicators that apply to large percentages of the population (e.g., demographics, common mental health conditions, common personality traits). Key types of evidence include the following:

  • Factors linked with actual plots: Did the presence of the factor significantly change the probability that the subject was planning an attack, as opposed to being in a control group? (Use of this factor requires a control group in the source article.)
    • We also experimented with using information-gain calculations to assess, in an information-theoretic sense, the value of knowing that a given factor was present in determining whether a subject was in the attacker group or the control group.
  • Factors linked with plots being foiled: Did the presence of the factor significantly change the relative probability that the plot was foiled successfully, as opposed to reaching execution?
  • Factors linked with increasing or decreasing casualties: Did the presence of the factor significantly increase or decrease the average casualties during a mass attack? This category included both simple statistical comparisons and regression models. It also included both empirical reviews of past mass attacks and laboratory experiments testing simulated shooters under varying conditions.
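The information-gain calculation mentioned above measures, in bits, how much knowing that a factor is present reduces uncertainty about whether a subject belongs to the attacker group or the control group. The counts in the following sketch are hypothetical:

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def information_gain(with_factor, without_factor):
    """Entropy reduction about group membership from knowing whether
    the factor is present.  Each argument is a pair of counts:
    (attackers, controls)."""
    n_with, n_without = sum(with_factor), sum(without_factor)
    n = n_with + n_without
    attackers = with_factor[0] + without_factor[0]
    controls = with_factor[1] + without_factor[1]
    # Uncertainty before observing the factor, minus the
    # count-weighted uncertainty after observing it.
    prior = entropy((attackers, controls))
    conditional = ((n_with / n) * entropy(with_factor)
                   + (n_without / n) * entropy(without_factor))
    return prior - conditional

# Hypothetical counts: the factor appears in 40 of 50 attacker
# cases but only 10 of 50 control cases.
print(round(information_gain((40, 10), (10, 40)), 3))
```

A gain near zero means the factor tells an assessor almost nothing about group membership; gains approaching the prior entropy (here, 1 bit) indicate a highly informative factor.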

We also included some articles and guidance documents presenting findings based on the issuing agency's (or the authors') extensive case experience, with source data not provided for sensitivity reasons. These include major federal guidance documents, such as the interagency booklet Homegrown Violent Extremist Mobilization Indicators and the FBI Behavioral Analysis Unit's Making Prevention a Reality: Identifying, Assessing, and Managing the Threat of Targeted Attacks (FBI, National Counterterrorism Center, and U.S. Department of Homeland Security, 2019; Amman et al., 2016).

For each article or report, we captured specific findings on warning signs, indicators, or casualty-mitigating measures. We noted the specific step in the Mass Attacks Defense Chain to which they applied (e.g., initial detection, threat assessment) and captured the type and strength of evidence.

We further reviewed all potential indicators and factors in terms of their operational feasibility and suitability (e.g., for warning signs and indicators, they were behaviors that could be observed, were operationally meaningful and actionable, and had a direct nexus to mass attack preparation; for attack mitigators, they were security measures or procedures that were likely to be operationally feasible and suitable).

We did not observe directly conflicting findings between our own case analysis and the key findings we captured from the literature; our findings were largely consistent with those of the prior analyses.

Identifying External Resources

We found candidate external resources through a combination of nominations by research team members, nominations by our expert interviewees, nominations by our advisory panelists, and online searches. Resources were reviewed for

  • operational relevance at specific steps in the Mass Attacks Defense Chain
  • how operationally actionable the information contained in the resource was to support the implementation of a specific step
  • how widely applicable the resource was (e.g., guidance from federal agencies or professional associations intended for nationwide use was prioritized over guidance that was highly localized to a specific jurisdiction)
  • credibility, as assessed by expert review, evidence (e.g., citation) included in the resource, and consistency with findings from the scholarly literature and our case analysis.

We selected specific resources for the toolkit based on how operationally relevant they were in providing detailed information or tools in support of specific steps in the Mass Attacks Defense Chain. Our objective was to provide a core assortment of resources for each step that is operationally useful and comprehensive but not overwhelming.

Subject-Matter Expert Interviews and Analysis

To garner insight on prevention, we conducted interviews with subject-matter experts across all levels of government and community sectors (e.g., law enforcement, the private sector, religious institutions) who might have worked to prevent or respond to mass violence attacks. Each initial interview was scheduled for one hour and included at least a primary interviewer and a primary notetaker. Follow-up interviews were scheduled in a few instances in which there were specific programs to discuss in greater depth. We drafted an interview protocol for use across all interviews that focused on the following categories: indicators, identification, mobilizers, prevention, response, and false positives. We developed the interview questions by soliciting potential questions from the research team and compiling them into a structured approach that would make the discussion notes amenable to comparative analysis while allowing interviewees the flexibility to share additional details on elements that were important to their mass violence preparedness, prevention, and response. The research team was made up of law enforcement and public safety practitioners and consultants, law enforcement criminal intelligence analysts, and researchers in mass violence and policing.

For the purposes of data collection and analysis, a dedicated notetaker took verbatim notes during each interview; team members who participated in the interviews then reviewed these notes to add any information that was not originally captured. Members of the team then coded each set of interview notes using an a priori coding structure aligned to the research questions and interview protocol. Working collaboratively, two team members developed a thematic synthesis using major codes and identified detailed codes. The interview materials were then recoded using these detailed codes, and team members drafted a synthesis of major themes for each interview. The team organized data pertaining to each interviewee by major and detailed codes in a Microsoft Excel workbook to provide counts of responses on various topics and to track illustrative quotations specific to each topic.
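The tallying step described above (performed by the team in a Microsoft Excel workbook) has a straightforward programmatic analogue. The interviewees, codes, and quotations below are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical coded interview excerpts:
# (interviewee, detailed code, quotation).
coded_segments = [
    ("agency_01", "indicators", "We always look for leakage first."),
    ("agency_01", "prevention", "Tip lines changed how quickly we hear."),
    ("agency_02", "indicators", "Leakage to peers is the common thread."),
    ("agency_02", "false_positives", "Most school threats do not pan out."),
]

# Counts of coded responses per topic.
counts = Counter(code for _, code, _ in coded_segments)

# Quotations tracked by topic, tagged with their source.
quotes = defaultdict(list)
for interviewee, code, quotation in coded_segments:
    quotes[code].append((interviewee, quotation))

print(counts["indicators"])        # coded responses on this topic
print(len(quotes["indicators"]))   # tracked quotations on this topic
```

The same structure supports both outputs the team describes: counts of responses by topic and illustrative quotations traceable to their interviews.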

Advisory Panel Review

Following the drafting of the toolkit, we assembled a panel of subject-matter experts and asked members to review toolkit pages in their areas of expertise. We then met with panelists in subgroups by area of expertise (detection and threat assessment; civil, privacy, and legal considerations; schools and state and local governments; tactical and first responders; academia and business; and faith, community, and social services). The members of the advisory panel provided significant feedback on the structure, framing, and specific content. They also suggested external resources to include, which we incorporated in revisions to this toolkit.

Note on Post-Attack Findings

The Post-Attack phase (Phase III: Follow Up After the Attack) was not part of the original terms of the study; it was added as a result of expert interviews, when it became clear that post-attack actions needed to be added to the Mass Attacks Defense Chain to support community resilience to—and learning from—mass shootings and other mass attacks. Thus, the Post-Attack findings are based on the expert interviews and literature searches; the case data and analysis do not provide material on the Post-Attack phase.


This project is a collaboration among the RAND Corporation, RTI International, the Lafayette Group, and Karchmer Associates. We wish to thank the subject-matter experts who graciously contributed their time for interviews and feedback on this toolkit. This toolkit would not be possible without the benefit of insights from public safety, community health and well-being, education, government, policy, and civil rights representatives. We also wish to thank the peer reviewers of this toolkit, whose recommendations led to significant improvements in the toolkit's utility. Finally, we thank the National Institute of Justice for the opportunity to advance understanding of prevention and mitigation of mass attacks.

Feedback and Data Requests

We are interested to hear how you are using the toolkit and how you think we can improve it; please send us your comments.

The complete quantitative data and associated codebook used in this study are available upon request.


The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. Funding for this research initiative was provided through the National Institute of Justice via the Investigator-Initiated Research and Evaluation on Firearm Violence (NIJ-2019-15288) solicitation.

Justice Policy Program

RAND Social and Economic Well-Being is a division of the RAND Corporation that seeks to actively improve the health and social and economic well-being of populations and communities throughout the world. This research was conducted in the Justice Policy Program within RAND Social and Economic Well-Being. The program focuses on such topics as access to justice, policing, corrections, drug policy, and court system reform, as well as other policy concerns pertaining to public safety and criminal and civil justice. For more information, email justicepolicy@rand.org.


  • Advanced Law Enforcement Rapid Response Training, "ALERRT Active Attack Data," webpage, undated. As of January 4, 2022:
  • ALERRT—See Advanced Law Enforcement Rapid Response Training.
  • Amman, Molly, Matthew Bowlin, Lesley Buckles, Kevin C. Burton, Kimberly F. Brunell, Karie A. Gibson, Sarah H. Griffin, Kirk Kennedy, and Cari J. Robins, Making Prevention a Reality: Identifying, Assessing, and Managing the Threat of Targeted Attacks, Washington, D.C.: Federal Bureau of Investigation, National Center for the Analysis of Violent Crime, Behavioral Analysis Unit, November 2016.
  • Anti-Defamation League, Terrorist Conspiracies, Plots and Attacks by Right-Wing Extremists, 1995–2015, New York, 2015.
  • Blair, J. Pete, and Katherine W. Schweit, A Study of Active Shooter Incidents, 2000–2013, Washington, D.C.: Federal Bureau of Investigation, 2014.
  • Bucci, Steven, James Carafano, and Jessica Zuckerman, Fifty Terror Plots Foiled Since 9/11: The Homegrown Threat and the Long War on Terrorism, Washington, D.C.: Heritage Foundation, April 25, 2012.
  • Center for Homeland Defense and Security, "K–12 School Shooting Database," webpage, undated. As of January 4, 2022: https://www.chds.us/ssdb/
  • Citizens Crime Commission of New York City, "Mass Shooting Incidents in America (1984–2012)," undated. As of January 4, 2022: http://www.nycrimecommission.org/mass-shooting-incidents-america.php
  • Crenshaw, Martha, Erik Dahl, and Margaret Wilson, Comparing Failed, Foiled, Completed and Successful Terrorist Attacks: Final Report Year 5, College Park, Md.: National Consortium for the Study of Terrorism and Responses to Terrorism, December 2017.
  • Crime Prevention Research Center, "Mass Public Shooting Cases 1998 Through May 2021," Microsoft Excel spreadsheet, undated.
  • Difo, Germain, Ordinary Measures, Extraordinary Results: An Assessment of Foiled Plots Since 9/11, Washington, D.C.: American Security Project, May 2010.
  • Duwe, Grant, "Patterns and Prevalence of Lethal Mass Violence," Criminology & Public Policy, Vol. 19, No. 1, 2020, pp. 17–35.
  • Everytown for Gun Safety Support Fund, "Ten Years of Mass Shootings in the United States: An Everytown for Gun Safety Support Fund Analysis," November 21, 2019. As of January 4, 2022: https://everytownresearch.org/massshootingsreports/mass-shootings-in-america-2009-2019/
  • FBI—See Federal Bureau of Investigation.
  • Federal Bureau of Investigation, "Active Shooter Resources," webpage, undated. As of January 4, 2022: https://www.fbi.gov/about/partnerships/office-of-partner-engagement/active-shooter-resources
  • Federal Bureau of Investigation, Active Shooter Incidents: 20-Year Review, 2000–2019, Washington, D.C.: U.S. Department of Justice, May 2021.
  • Federal Bureau of Investigation, National Counterterrorism Center, and U.S. Department of Homeland Security, Homegrown Violent Extremist Mobilization Indicators, Washington, D.C., 2019.
  • Follman, Mark, Gavin Aronsen, and Deanna Pan, "US Mass Shootings, 1982–2021: Data from Mother Jones' Investigation," webpage, updated November 30, 2021. As of January 3, 2022: https://www.motherjones.com/politics/2012/12/mass-shootings-mother-jones-full-data/
  • H2O.ai, "H2O AutoML," webpage, undated. As of April 15, 2022: https://h2o.ai/platform/h2o-automl/
  • Lankford, Adam, and James Silver, "Why Have Public Mass Shootings Become More Deadly? Assessing How Perpetrators' Motives and Methods Have Changed Over Time," Criminology & Public Policy, Vol. 19, No. 1, 2020, pp. 37–60.
  • Lott, John R., Jr., and Carlisle E. Moody, "Is the United States an Outlier in Public Mass Shootings? A Comment on Adam Lankford," Econ Journal Watch, Vol. 16, No. 1, March 2019, pp. 37–68.
  • Mayors Against Illegal Guns, Analysis of Recent Mass Shootings, New York, September 2013.
  • National Consortium for the Study of Terrorism and Responses to Terrorism, "Global Terrorism Database," webpage, undated. As of January 4, 2022: https://www.start.umd.edu/gtd/
  • National Police Foundation, "Averted School Violence," webpage, undated. As of January 4, 2022: https://www.avertedschoolviolence.org/
  • National Threat Assessment Center, Mass Attacks in Public Spaces: 2017, Washington, D.C.: U.S. Secret Service, U.S. Department of Homeland Security, March 2018.
  • National Threat Assessment Center, Mass Attacks in Public Spaces: 2018, Washington, D.C.: U.S. Secret Service, U.S. Department of Homeland Security, July 2019.
  • National Threat Assessment Center, Mass Attacks in Public Spaces: 2019, Washington, D.C.: U.S. Secret Service, U.S. Department of Homeland Security, August 2020.
  • O'Neill, James P., John J. Miller, and James R. Waters, Active Shooter: Recommendations and Analysis for Risk Mitigation, New York: New York City Police Department, 2016.
  • Smart, Rosanna, and Terry L. Schell, "Mass Shootings in the United States," webpage, April 15, 2021. As of January 4, 2022: https://www.rand.org/research/gun-policy/analysis/essays/mass-shootings.html
  • Southern Poverty Law Center, "Terror from the Right," webpage, undated. As of January 4, 2022: https://www.splcenter.org/terror-from-the-right
  • Stanford Geospatial Center, "Mass Shootings in America," Stanford, Calif.: Stanford University Libraries, undated. As of January 4, 2022: https://library.stanford.edu/projects/mass-shootings-america
  • START—See National Consortium for the Study of Terrorism and Responses to Terrorism.
  • Sweeney, Matthew M., and Arie Perliger, "Explaining the Spontaneous Nature of Far-Right Violence in the United States," Perspectives on Terrorism, Vol. 12, No. 6, December 2018, pp. 52–71.
  • Violence Policy Center, "Concealed Carry Killers," webpage, updated September 27, 2021a. As of January 4, 2022: http://concealedcarrykillers.org/
  • Violence Policy Center, Large Capacity Ammunition Magazines, Washington, D.C., updated November 1, 2021b.
  • The Violence Project, "Mass Shooter Database," web tool, version 2, undated. As of January 4, 2022: https://www.theviolenceproject.org/mass-shooter-database/
  • Wikipedia, "List of Unsuccessful Terrorist Plots in the United States Post-9/11," webpage, updated January 2, 2022a. As of January 4, 2022: https://en.wikipedia.org/wiki/List_of_unsuccessful_terrorist_plots_in_the_United_States_post-9/11
  • Wikipedia, "List of Unsuccessful Attacks Related to Schools," webpage, updated January 7, 2022b. As of January 7, 2022: https://en.wikipedia.org/wiki/List_of_unsuccessful_attacks_related_to_schools