Based on the results of statewide standardized tests, more than 15 percent of U.S. schools are in need of improvement. The students attending these schools need help.
Under the federal No Child Left Behind Act, billions of dollars have been dedicated to providing them with better educational opportunities. Up to 20 percent of districts' Title I funds, for example, must be set aside to transport such students to higher-performing schools, or to provide their parents with the option of enrolling them, at no cost to the family, in supplemental educational services chosen from a list of state-approved providers. The providers may be for-profit companies, nonprofit organizations, or even school districts themselves, and they can offer tutoring, remediation, or other academic instruction.
In this era of accountability and competition, one might expect that providers of supplemental educational services, or SES, would be evaluated according to how well they help the students they are paid to serve. But as it stands, NCLB emphasizes guaranteeing all providers access to parents over rigorously evaluating those providers.
Unfortunately, states are effectively precluded from conducting the most rigorous evaluations of whether supplemental educational services benefit students and, if so, which providers are most (and least) effective. The No Child Left Behind law bars states from employing one of the most important techniques for evaluating the effectiveness of a tutor, a new medicine, or any other intervention: randomization. Allowing states to randomly assign the children of consenting parents to a specific SES provider among those available, or to no supplemental services at all, would add greatly to what can now be inferred about provider quality. The prohibition against this evaluation method may have been put in place to promote parental choice, but the result is that parents lack the information they need to choose a provider wisely. And with only minimal reporting now required, providers have little incentive to compete on quality.
In other contexts, so-called social experiments of this kind have not only been permitted but congressionally mandated. The Housing Assistance Supply Experiment, funded by the U.S. Department of Housing and Urban Development in the mid-1970s, showed policymakers that cash housing allowances benefited the neediest families more than constructing public housing did, and at lower cost. Similarly, the federally funded Health Insurance Experiment of that decade randomly assigned families to insurance plans ranging from free care to 95 percent cost sharing. It found that families with free care used 50 percent more health services than those in cost-sharing arrangements or health maintenance organizations, with negligible benefits to the average person's health.
Such experiments provide critical insight into what social policies work, and have been instrumental in improving policy.
There are other ways evaluators try to compare groups of people who do and do not receive services. For example, although on average fewer than 20 percent of students eligible for supplemental services sign up to receive them, a small number of school districts are oversubscribed—more students sign up than can be accommodated. In such a district, one can compare changes in achievement for students selected to receive services with changes for those who signed up but were not served.
Such a design is being used for a national evaluation of supplemental-services providers. But ultimately, it is not clear how much the results from these few districts, where SES providers may be stronger or parents and students more motivated, can tell us about the effectiveness of supplemental services in the majority of school districts.
The pending reauthorization of the Elementary and Secondary Education Act—the law that's been known as No Child Left Behind since its last revision eight years ago—provides an excellent opportunity to set up a powerful system to evaluate supplemental-services providers, one modeled on the best social experiments of the 1970s.
Congress could permit, incentivize, or mandate state demonstration projects so that states could more easily use some of their SES funds for rigorous comparative-effectiveness research. During the initial phase of such a project, rather than allowing parents to choose among tutors, students could be randomly assigned to a certified provider offering services in the area. Changes in their academic results would then be compared with those of other groups of students.
Congress should, in addition, allocate financial resources to enable states to undertake rigorous evaluations of SES providers. This simple yet powerful evaluation method using random assignment, combined with other approaches, would not deny tutoring to students, but would tell us whether the better performance of students assigned to one provider is due to the provider, or whether some providers merely attract stronger students. In the end, all students would benefit.
Megan Beckett is a sociologist at the RAND Corp., a nonprofit policy-research institution with headquarters in Santa Monica, Calif.
This op-ed originally appeared on edweek.org on January 20, 2010.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.