Concurrent Policies Simulation Tool

Good policymaking depends on reliable evaluations of state-level policies. But reliable evaluations face multiple barriers, including the question of whether statistical models can disentangle the effects of concurrently implemented policies.

To test the performance of commonly used models, we conducted simulations to assess how concurrently implemented policies affected the evaluation of hypothetical state-level policies. We obtained outcome data (annual opioid mortality rate per 100,000 residents, by state) from the National Vital Statistics System Multiple Cause of Death mortality files, from 1999 to 2016, for all 50 states. In each simulation, we modified the outcome data by adding in the hypothetical effects of two co-occurring policies, and then we examined how accurately the models estimated those effects. We repeated this over a range of simulation parameters, such as the effect sizes of the policies and the length of time between their enactment dates.
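In outline, each simulation run injects known policy effects into the observed outcome series and then checks how well a model recovers them. Below is a minimal, hypothetical sketch of one such run in Python, using synthetic data in place of the NVSS mortality files; the two-way fixed-effects regression, variable names, and effect sizes are illustrative assumptions, not the exact models evaluated by the tool.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical stand-in for the NVSS state-year mortality panel (1999-2016).
states = [f"state_{i:02d}" for i in range(50)]
years = np.arange(1999, 2017)
panel = pd.DataFrame(
    [(s, y) for s in states for y in years], columns=["state", "year"]
)
panel["mortality"] = rng.normal(loc=10.0, scale=2.0, size=len(panel))

# Inject the "true" effects of two co-occurring policies into treated states.
treated = rng.choice(states, size=30, replace=False)
effect1, effect2 = -1.0, -1.0  # assumed true effects (deaths per 100,000)
enact1 = dict(zip(treated, rng.integers(2002, 2010, size=len(treated))))
enact2 = {s: y + rng.integers(1, 4) for s, y in enact1.items()}  # secondary later

panel["policy1"] = [
    int(s in enact1 and y >= enact1[s]) for s, y in zip(panel.state, panel.year)
]
panel["policy2"] = [
    int(s in enact2 and y >= enact2[s]) for s, y in zip(panel.state, panel.year)
]
panel["mortality"] += effect1 * panel.policy1 + effect2 * panel.policy2

# Correctly specified model: includes both policies plus state and year fixed effects.
correct = smf.ols(
    "mortality ~ policy1 + policy2 + C(state) + C(year)", data=panel
).fit()
# Misspecified model: omits the concurrent secondary policy.
misspec = smf.ols("mortality ~ policy1 + C(state) + C(year)", data=panel).fit()

print("true effect of policy 1:      ", effect1)
print("correctly specified estimate: ", round(correct.params["policy1"], 3))
print("misspecified estimate:        ", round(misspec.params["policy1"], 3))
```

Comparing the two fitted models illustrates the specification distinction described under Simulation Parameters below: omitting the concurrent secondary policy can distort the estimate for the primary policy.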

This tool allows users to explore how the different models perform under varying policy scenarios. More information about the simulation parameters and model performance metrics is available below the displayed results.

Simulation Parameters

Model
The type of statistical model used to evaluate the policies.
Specification
A correctly specified model includes the secondary policy, while a misspecified model omits the secondary policy.
N Treated Units
The number of states enacting policies.
Ordered Policies
Was the primary policy always implemented before the secondary policy?
Effect Size
Effect sizes of the two policies are shown as percentages, relative to the average state-level value of the outcome variable (opioid mortality rate). For example, the –10% / –10% option captures scenario results in which both policies decrease the opioid mortality rate by 10 percent once enacted.
Policy Implementation Speed
Did the policy have its full effect immediately after implementation (“instant”), or did it gradually ramp up over a period of three years (“slow”)?
Rho
Correlation of the enactment times of the two policies. The greater the correlation, the closer together the policies' enactment dates will be; the sketch following this list illustrates how rho, effect size, and implementation speed enter the simulated data.
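
To make the Effect Size, Policy Implementation Speed, and Rho parameters concrete, here is a small, hypothetical Python sketch of how they might enter a simulation. The mapping from correlated normal draws to enactment years and the linear three-year ramp are illustrative assumptions; the tool's actual sampling scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def enactment_years(n_states, rho, lo=2002, hi=2012):
    """Draw correlated enactment years for the primary and secondary policies.

    Higher rho pulls the two enactment dates closer together. The mapping
    from standard-normal draws to calendar years is an illustrative choice.
    """
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_states)
    span = hi - lo
    years = lo + span / 2 + z * span / 6  # center and scale the draws
    return np.clip(np.round(years), lo, hi).astype(int)

def exposure(year, enact_year, speed="instant", ramp_years=3):
    """Fraction of a policy's full effect that is in place in a given year."""
    if year < enact_year:
        return 0.0
    if speed == "instant":
        return 1.0  # full effect immediately upon enactment
    return min(1.0, (year - enact_year + 1) / ramp_years)  # linear ramp-up

# Effect sizes are relative to the average outcome level: a -10% effect on a
# mean mortality rate of 10 per 100,000 is a change of -1.0 per 100,000.
mean_outcome = 10.0
true_effect = -0.10 * mean_outcome

# Higher rho shrinks the typical gap between the two enactment dates.
for rho in (0.0, 0.9):
    years = enactment_years(1000, rho)
    gap = np.abs(years[:, 0] - years[:, 1]).mean()
    print(f"rho={rho}: mean gap between enactment dates = {gap:.2f} years")

# A "slow" policy enacted in 2005 phases in over three years.
print([exposure(y, 2005, speed="slow") for y in range(2004, 2009)])
# -> [0.0, 0.333..., 0.666..., 1.0, 1.0]
```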

Model Performance Metrics

Bias (or Relative Bias when comparing models)
The tendency of the model’s estimated effects to deviate systematically from the true effect; that is, the average difference between the estimated and true effects. A good model should minimize bias.
Coverage
The ability of estimated confidence intervals to cover (i.e., contain) the true effect in repeated samples, measured as the percentage of simulations in which the true effect falls within the estimated confidence interval.
RMSE (root mean squared error)
The overall error of a given model’s estimates, taking into account both bias and variance. A good model will have low RMSE.
Type 1 Error
The frequency with which the model incorrectly identifies an effect when there is none (i.e., a false positive). A good model should give false positives only about 5 percent of the time, because all models are evaluated at a 5-percent significance level.
Variance
The variability of the model’s estimated effects for each policy. A good model will tend to have small variance.
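
As an illustration, the metrics above could be computed from repeated simulation runs along the following lines. This is a hypothetical sketch: the function name and arguments are invented for illustration, and the normal approximation for the confidence intervals is an assumption.

```python
import numpy as np

def performance_metrics(estimates, std_errors, true_effect):
    """Summarize repeated-simulation results with the metrics defined above.

    `estimates` and `std_errors` hold one model estimate (and its standard
    error) per simulation run; the 5-percent significance level and the
    normal critical value mirror the text.
    """
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = 1.96  # normal critical value for a 95% confidence interval

    bias = estimates.mean() - true_effect
    variance = estimates.var(ddof=1)
    rmse = np.sqrt(np.mean((estimates - true_effect) ** 2))
    lower = estimates - z * std_errors
    upper = estimates + z * std_errors
    coverage = np.mean((lower <= true_effect) & (true_effect <= upper))
    # Type 1 error is defined on null scenarios (true effect of zero):
    # how often the model "finds" an effect that is not there.
    type1_error = (np.mean(np.abs(estimates / std_errors) > z)
                   if true_effect == 0 else None)
    return {"bias": bias, "variance": variance, "rmse": rmse,
            "coverage": coverage, "type1_error": type1_error}

# Example: 1,000 simulated estimates scattered around a true effect of -1.0.
rng = np.random.default_rng(2)
est = rng.normal(-1.0, 0.3, size=1000)
se = np.full(1000, 0.3)
print(performance_metrics(est, se, true_effect=-1.0))

# Type 1 error check: estimates drawn around a true effect of zero should be
# flagged as significant only about 5 percent of the time.
null_est = rng.normal(0.0, 0.3, size=1000)
print(performance_metrics(null_est, se, true_effect=0.0)["type1_error"])
```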