Can Broad Inferences Be Drawn from Lottery Analyses of School Choice Programs?

An Exploration of Appropriate Sensitivity Analyses

Published in: Journal of School Choice: International Research and Reform, v. 10, no. 1, Mar. 2016, p. 48-72

Posted on RAND.org on April 01, 2016

by Ron Zimmer, John Engberg


This article was published outside of RAND. The full text is available from the Journal of School Choice: International Research and Reform.

Research Question

  1. How can researchers test their analyses of school choice programs for indications that they, along with media and policy stakeholders, should be cautious about drawing broad inferences from the results?

School choice programs remain controversial, spurring a number of researchers to evaluate them. When possible, researchers evaluate the effect of attending a school of choice using randomized designs to eliminate possible selection bias. Randomized designs are often regarded as the gold standard for research, but in the context of school choice programs, many circumstances can limit the external validity of inferences drawn from them. In this article, we examine whether these limitations apply to previous evaluations of voucher, charter school, magnet, and open-enrollment programs. We devise simple sensitivity analyses that researchers can conduct when analyzing lotteried programs to determine whether there are reasons to be cautious about the breadth of appropriate inferences.
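The article's specific sensitivity analyses are not reproduced here, but the following is a minimal sketch, assuming a pandas-based workflow, of one generic check in this spirit: comparing observable characteristics of lottery applicants with those of the broader district population. Large differences suggest that estimates from the lottery sample may not generalize to all students. All column names, thresholds, and data below are hypothetical illustrations, not the authors' procedure.

```python
# Hypothetical external-validity check for a lottery-based school choice study:
# standardized mean differences between lottery applicants and the district.
# Data and variable names are illustrative only.

import numpy as np
import pandas as pd


def standardized_mean_differences(applicants: pd.DataFrame,
                                  district: pd.DataFrame,
                                  covariates: list[str]) -> pd.Series:
    """For each covariate, compute (mean_applicants - mean_district) / pooled SD.

    Large absolute values (a common rule of thumb is > 0.25) indicate that
    lottery applicants differ from the wider student population, so effects
    estimated from the lottery may not extend to all students.
    """
    diffs = {}
    for col in covariates:
        a = applicants[col].astype(float)
        d = district[col].astype(float)
        pooled_sd = np.sqrt((a.var(ddof=1) + d.var(ddof=1)) / 2.0)
        diffs[col] = (a.mean() - d.mean()) / pooled_sd if pooled_sd > 0 else 0.0
    return pd.Series(diffs, name="std_mean_diff")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: applicants score somewhat higher on prior tests and are
    # less likely to receive free lunch than the district as a whole.
    district = pd.DataFrame({
        "prior_test_score": rng.normal(0.0, 1.0, 5000),
        "free_lunch": rng.binomial(1, 0.55, 5000),
    })
    applicants = pd.DataFrame({
        "prior_test_score": rng.normal(0.3, 1.0, 400),
        "free_lunch": rng.binomial(1, 0.45, 400),
    })
    print(standardized_mean_differences(
        applicants, district, ["prior_test_score", "free_lunch"]))
```

A check of this kind does not by itself invalidate a lottery-based estimate; it simply flags when caution is warranted before extending the findings to students or schools unlike those in the lottery pool.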

Key Findings

  • Studies of school choice programs that use a lottery-based admissions process are frequently susceptible to misinterpretation.
  • Some journalists, policymakers, and other stakeholders may not understand the limitations of a given study and erroneously state that its findings apply to other types of students or schools.
  • Researchers should evaluate their studies for issues with external validity and more clearly state whether caution is needed in making inferences from the results.

Recommendation

Researchers should evaluate whether their findings are externally valid by using the sensitivity analysis tools discussed in the article.

This report is part of the RAND Corporation External publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.
