High school represents a critical time in students' academic, social, and emotional development. While many students navigate high school successfully, others struggle. Every year, millions of students fall behind academically or drop out of school entirely. Many high schools in high-poverty areas struggle to prepare students for graduation within four years. These issues were concerning even before the COVID-19 pandemic began, and there is evidence that the pandemic has exacerbated them.
However, relatively few interventions suited for high schools are supported by evidence of positive effects; those that are tend to focus on 9th and 10th grades. The Center for Research and Reform in Education reviewed 30 mathematics programs and found only five with evidence of effective implementation. In a comprehensive RAND review of Social and Emotional Learning interventions that met ESSA evidence standards, only eight focused on high school students and none of those were supported by rigorous evidence.
The ultimate goal of every education intervention is to help enable students to achieve their fullest potential. Test scores and other measures used in research are proxies for adult outcomes too far in the future to feasibly measure. But two factors differentiate high school studies from those of earlier grades.
First, outcomes are measured with less regularity and uniformity. In lower grades, common assessments are administered to all students each year. In contrast, high school assessments do not follow a uniform administration schedule and assessments are typically tied to courses rather than to specific grades. Although researchers do not have to rely on routine assessment schedules to measure outcomes, administering supplemental assessments to students places additional burdens on participants and research budgets.
Second, measures of early adult outcomes are more feasible to collect, although most studies lack the time and resources to capitalize on this opportunity by tracking students and gathering measures such as college persistence or employment.
These differences pose obstacles to research and evaluation and may explain the lack of evidence for high school initiatives.
We confronted these challenges in our recent five-year study of Carnegie Corporation of New York's Opportunity by Design initiative (ObD). The initiative aimed to create high schools designed to help underprepared students catch up and graduate in four years with the academic and social and emotional skills needed for postsecondary success. Although teachers in ObD schools reported more extensive use of innovative instructional approaches than teachers nationally, the study did not find discernible improvement on available student outcome measures. There are many plausible substantive explanations for these findings (described in the project report), but it is also possible that these measurement issues hampered our ability to capture potential ObD effects.
Each of the school districts in this study administered different examinations. We handled these differences by focusing on the subjects and courses in which a majority of students were tested, and by employing an alternative measure of student achievement, the MAP, to assess student growth. It was beyond the scope of the study to follow students long enough to obtain information about postsecondary success. Thus, we relied on consistently available measures, such as SAT scores and credit accumulation, as proxies for important postsecondary outcomes.
As educators implement new programs intended to support learning recovery, address trauma, and promote social emotional learning among high school students, research that is able to provide sound evidence about effectiveness is more important than ever. To help build an evidence base of successful high school programs, we offer the following recommendations:
Support longer-term research and evaluation. Researchers should design studies to include measures of adult outcomes. State policymakers should ease researcher access to longitudinal education data systems and link those systems to others tracking employment, welfare, income, social services, and the justice system. School systems should think through the nature and duration of commitment to programs and program implementation, balancing the need for immediate school improvement with the need for a more comprehensive understanding of longer-term impacts. Funders should consider supporting longer project durations and higher funding limits.
Researchers should develop consensus on best practices for addressing the complexity of measuring high school outcomes. There is little guidance on working with the irregular schedules and diverse content of high school assessments, particularly across districts or states. In contrast to the approach we used in the ObD study, some studies use statistical models that retain all information from the diverse tests, and others rely on administering supplemental assessments. Each of these approaches has strengths and limitations; researchers focused on high school could pool their knowledge to develop guidance for the field.
Given all of the challenges, limited evidence on high school programs is unsurprising. We believe stronger evidence can be built through collaborative efforts of researchers, state and local education agencies, and funders.
Jonathan Schweig is a social scientist, Elizabeth D. Steiner is a policy researcher, and John F. Pane is a senior scientist at the nonprofit, nonpartisan RAND Corporation.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.