Identifying Optimal Methods for Addressing Confounding Bias When Estimating the Effects of State-level Policies

Published in: Epidemiology, Volume 34, No. 6, pages 856-864 (November 2023). DOI: 10.1097/EDE.0000000000001659

Posted on rand.org Apr 17, 2024

by Beth Ann Griffin, Megan S. Schuler, Elizabeth Stone, Stephen W. Patrick, Bradley D. Stein, Pedro Nascimento de Lima, Max Griswold, Adam Scherling, Elizabeth A. Stuart

Background

Policy evaluation studies that assess how state-level policies affect health-related outcomes are foundational to health and social policy research. The relative ability of newer analytic methods to address confounding, a key source of bias in observational studies, has not been closely examined.

Methods

We conducted a simulation study to examine how differing magnitudes of confounding affected the performance of 4 methods used for policy evaluations: (1) the two-way fixed effects difference-in-differences model; (2) a 1-period lagged autoregressive model; (3) the augmented synthetic control method; and (4) the doubly robust difference-in-differences estimator with multiple time periods of Callaway and Sant'Anna. We simulated data with staggered policy adoption under multiple confounding scenarios (i.e., varying the magnitude and nature of the confounding relationships).
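For concreteness, the sketch below shows how each of the 4 estimators might be fit to a simulated state-year panel in R. The data frame `df`, its column names, and the package choices (fixest, augsynth, did) are illustrative assumptions for this sketch, not the authors' exact implementation.

```r
# Illustrative sketch only: assumes a simulated state-year panel `df` with
# columns state (factor), year (integer), y (outcome), treated (0/1 policy
# indicator), and first_treat (year of adoption; 0 if never treated).
library(fixest)    # two-way fixed effects
library(augsynth)  # augmented synthetic control (github.com/ebenmichael/augsynth)
library(did)       # Callaway and Sant'Anna estimator

# (1) Two-way fixed effects difference-in-differences
twfe <- feols(y ~ treated | state + year, data = df, cluster = ~state)

# (2) 1-period lagged autoregressive model: regress the outcome on its own
# lag plus the policy indicator and year effects
df <- df[order(df$state, df$year), ]
df$y_lag <- ave(df$y, df$state, FUN = function(v) c(NA, head(v, -1)))
ar1 <- lm(y ~ y_lag + treated + factor(year), data = df)

# (3) Augmented synthetic control; multisynth() handles staggered adoption
asc <- multisynth(y ~ treated, unit = state, time = year, data = df)

# (4) Callaway and Sant'Anna doubly robust DiD with multiple time periods
# (att_gt() requires a numeric unit id and uses est_method = "dr" by default)
df$state_id <- as.integer(factor(df$state))
cs <- att_gt(yname = "y", tname = "year", idname = "state_id",
             gname = "first_treat", data = df)
cs_overall <- aggte(cs, type = "simple")  # aggregate the group-time ATTs
```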

Results

Bias increased for each method: (1) as confounding magnitude increased; (2) when confounding was generated with respect to prior outcome trends (rather than levels); and (3) when confounding associations were nonlinear (rather than linear). The autoregressive model and augmented synthetic control method had notably lower root mean squared error than the two-way fixed effects and Callaway and Sant'Anna approaches across all scenarios, with one exception: under nonlinear confounding by prior trends, the Callaway and Sant'Anna approach performed best. Coverage rates were unreasonably high for the augmented synthetic control method (e.g., 100%), reflecting large model-based standard errors and wide confidence intervals in practice.
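As a hedged illustration of the performance metrics reported here, the snippet below shows how bias, root mean squared error, and 95% confidence-interval coverage could be computed across simulation replicates; the inputs `est`, `se`, and `true_att` are hypothetical placeholders, not the paper's actual simulation output.

```r
# Illustrative metric calculations, assuming vectors of point estimates
# (est) and standard errors (se) across replicates, and the known true
# effect used to generate the data (true_att).
performance <- function(est, se, true_att) {
  bias <- mean(est - true_att)
  rmse <- sqrt(mean((est - true_att)^2))
  lower <- est - 1.96 * se
  upper <- est + 1.96 * se
  # coverage near 100% signals overly wide intervals, as seen for the
  # augmented synthetic control method above
  coverage <- mean(lower <= true_att & true_att <= upper)
  c(bias = bias, rmse = rmse, coverage = coverage)
}
```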

Conclusions

In our simulation study, no single method consistently outperformed the others, underscoring that a researcher's toolkit should include all of these methodologic options. Our simulations and associated R package can help researchers choose the most appropriate approach for their data.

This report is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.