In this report, part of the RAND Gun Policy in America initiative, the authors discuss four common methodological problems that they observed in the literature evaluating gun policies and offer suggestions for how future research on gun policies could be improved.
- What are some of the methodological problems that have contributed to confusion about the true effects of gun policies?
- What alternative methods might overcome these problems and thus produce more-credible and more-reliable information about gun policy?
Research on gun policy topics has often been controversial, partly because different researchers studying the same questions—and typically using the same or similar data sets—have often reported contradictory findings, leading to confusion about the merits of the policies being studied. One potential explanation is that different researchers may be using methods that are more or less appropriate to the gun policy topics they are investigating.
In presenting these ideas, the authors hope to raise awareness of the weaknesses of commonly used methods for estimating gun policy effects, stimulate debate about how best to address those limitations, and encourage reviewers to advocate for stronger methods before accepting papers for publication.
- The suggestions for addressing the four common methodological problems discussed in this analysis are improving the definition and measurement of the policy effects of interest, reducing bias through more-careful consideration of confounding factors, ensuring that key model assumptions are met, and avoiding models with low statistical power.
- The authors highlight evidence suggesting that many results from even the best gun policy studies were based on methods that reject the null hypothesis too frequently, on models with so little statistical power that the published effects may not reflect the true effects, and on methods likely to produce biased estimates of policy effects.
- Researchers should be explicit about which U.S. states they consider to have a given policy; when they assume the policy began having an effect; and how, if at all, they distinguish between case law and statutory law.
- Researchers should avoid using spline models that assume that the effect of the policy increases linearly in perpetuity and should explicitly justify the period over which a policy's effects are expected to grow.
- Researchers should attempt to control for all potential confounders that are likely to affect both the treatment selection (e.g., adoption of a policy) and the outcome but that are not themselves likely to be affected by the treatment.
- Researchers should use parsimonious models of gun policy effects, avoiding controls for more covariates than can be well estimated with the available data; overly complex models should be simplified.
- Researchers using synthetic control methods should consider the extent to which their estimates could be biased by model overfitting or by imperfect balance on pre-treatment outcomes and predictors between the treated and synthetic control groups.
- Researchers using synthetic control methods with one or a few treated units should be cognizant that commonly used permutation tests, as well as more-novel inferential procedures, rely on exchangeability assumptions that are unlikely to hold in most gun policy evaluation contexts unless the potential control units are restricted to those that closely resemble the treated unit or units.
- Researchers should consider alternatives to the null-hypothesis testing framework, such as Bayesian estimation and statistical inference, except when the researchers can demonstrate that they have adequate power to reject the null hypothesis using standard significance testing.
- Researchers and reviewers should generally consider state-level studies with a single treated state as exploratory analyses with unknown generalizability or statistical significance.
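The spline recommendation above can be made concrete with a minimal sketch: instead of a trend term that grows in perpetuity, the policy's trend regressor is capped after an explicitly chosen phase-in period. The five-year cap below is a hypothetical choice for illustration, not a value from the report.

```python
import numpy as np

def phase_in_trend(years_since_adoption, phase_in_years=5.0):
    """Policy-trend regressor whose growth stops after a phase-in period.

    A plain linear spline (years_since_adoption itself) assumes the
    policy effect grows forever; clipping the term forces the analyst
    to state, and justify, when growth is expected to stop. The 5-year
    default is purely illustrative.
    """
    t = np.asarray(years_since_adoption, dtype=float)
    return np.clip(t, 0.0, phase_in_years)

# Years -2..8 relative to adoption: zero before adoption, flat after year 5.
print(phase_in_trend(np.arange(-2, 9)))
```

The same idea generalizes to richer spline bases; the point is that the knot ending the growth period is an explicit, defensible modeling choice rather than an implicit assumption of unbounded growth.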
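The pre-treatment balance concern for synthetic control methods can be checked with a simple diagnostic: the root-mean-squared prediction error (RMSPE) between the treated unit's pre-treatment outcomes and the synthetic control's. A large pre-treatment RMSPE signals the imperfect balance the authors warn about; a near-zero RMSPE achieved with many predictors can instead signal overfitting. The outcome series below are hypothetical.

```python
import numpy as np

def pretreatment_rmspe(treated_outcomes, synthetic_outcomes):
    """RMSPE between treated and synthetic outcomes over the
    pre-treatment years; a rough balance diagnostic, not a formal test."""
    gap = np.asarray(treated_outcomes, float) - np.asarray(synthetic_outcomes, float)
    return float(np.sqrt(np.mean(gap ** 2)))

# Hypothetical pre-treatment outcome series (e.g., a rate per 100,000):
treated = [4.0, 4.2, 4.1, 4.3]
synthetic = [4.1, 4.1, 4.2, 4.2]
print(round(pretreatment_rmspe(treated, synthetic), 3))  # → 0.1
```

Reporting this quantity alongside the effect estimate lets readers judge how well the synthetic control actually tracked the treated unit before the policy took effect.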
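Finally, the statistical-power recommendation can be checked by simulation before a study is run: generate data under an assumed effect size and count how often the planned test rejects the null. Everything here, including the effect sizes, group sizes, and the simple two-sample z-test standing in for a full panel model, is an illustrative assumption.

```python
import numpy as np

def simulated_power(effect, n_treated, n_control, sd=1.0,
                    z_crit=1.96, n_sims=2000, seed=0):
    """Monte Carlo power of a two-sided, two-sample z-test (known sd,
    alpha = 0.05 at z_crit = 1.96) for a true mean shift `effect`.

    A stand-in for the real design: a serious power analysis would
    simulate the actual panel model, not a two-group comparison.
    """
    rng = np.random.default_rng(seed)
    se = sd * np.sqrt(1.0 / n_treated + 1.0 / n_control)
    rejections = 0
    for _ in range(n_sims):
        diff = (rng.normal(effect, sd, n_treated).mean()
                - rng.normal(0.0, sd, n_control).mean())
        if abs(diff) > z_crit * se:
            rejections += 1
    return rejections / n_sims

# With few treated units and a modest effect, power falls far below the
# conventional 0.8 target:
print(simulated_power(effect=0.3, n_treated=5, n_control=45))
```

A rejection rate near 0.05 under `effect=0.0` confirms the test's size, while low power under plausible effect sizes is exactly the warning sign the authors describe: a "significant" published effect from such a design is likely to overstate the true effect.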