The COVID-19 pandemic has created an unprecedented set of obstacles for schools and exacerbated existing structural inequalities in public education. While preliminary accounts highlight the impact of COVID-19 on a variety of student outcomes, it may take years to unpack how the pandemic affected student learning and social and emotional development and to identify any lasting effects on low-income communities and communities of color.
COVID-19 also presents immediate decisionmaking challenges for schools and districts that rely on annual data collection to monitor student progress and performance. Such assessments are used for a wide variety of purposes, including holding schools and teachers accountable, organizing professional development, supporting formal and informal program evaluation, determining course placements and access to additional supports, and informing instructional decisionmaking.
The nonprofit RAND Corporation, in collaboration with NWEA, has initiated a research project to guide schools and school districts in using assessment data as they respond to the COVID-19 pandemic. Specifically, our work focuses on developing strategies for addressing the impacts of COVID-19 disruptions on student assessment programs, providing empirical evidence of the strengths and limitations of particular strategies for making decisions in the absence of assessment data, and offering an investigative framework to guide the collection of evidence that supports strategy adoption.
As we embark on this project, we have solicited input from local education agencies across the country, using a combination of surveys and structured interviews, to capture districts' needs during this dynamic period. While we caution that our research is in its preliminary stages and our sample of respondents is not nationally representative, the initial feedback focused attention on a few key concerns around decisionmaking in the absence of spring assessment data.
School Systems Rely on Spring Assessment Data to Inform Course Placement Decisions
Established processes that school systems use for determining course placement were disrupted by the lack of spring testing. For example, we heard that administrators typically rely on spring assessment scores—often in conjunction with other assessment information, course grades, and teacher recommendations—to make determinations for course placements, such as who should enroll in accelerated or advanced mathematics classes.
School Systems Rely on Spring Assessment Data to Evaluate Programs or District-Wide Initiatives
Schools and school systems frequently adopt programs, pilot interventions, or purchase support materials to improve student social, emotional, and academic outcomes, or to address disparities in student success. We heard that many districts monitor the success of these programs internally by looking at year-to-year change or growth for schools or subgroups of interest. The gap in the data caused by assessment disruptions leaves schools and school systems with a challenge: how to measure growth or change over time, especially as COVID-19 has had differential impacts on student subgroups.
School Systems Responded to These Challenges in a Variety of Ways
The most striking pattern thus far is the variation in school system responses, often driven by differences in local contexts and resources. For some schools, the shift to remote learning and assessment compounded recent difficulties with adopting online assessment systems, resulting in large numbers of missing or invalid tests. In places where online assessments weren't feasible in spring 2020, school systems used older data to make course recommendations, either from the winter or, in some cases, from the previous (2018-19) school year. Relatedly, we heard of school systems that relaxed typical practices built around annual assessment data, granting individual schools more autonomy and relying on school staff to exercise local judgment around course placements. Finally, other systems reported using more complex imputation models, projecting student scores from their assessment histories. While by no means exhaustive, these examples run the gamut of experiences and responses by school systems throughout the country.
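To make the last of these strategies concrete, the following is a minimal sketch of one way a score projection might work: fit a simple linear model on students with complete assessment histories, then use it to project a missing spring 2020 score. Everything here is our own illustrative assumption, including the synthetic scores, the fall/winter/spring structure, and the ordinary-least-squares model; the imputation models districts actually used were likely more elaborate.

```python
"""A minimal, hypothetical sketch of projecting a missing spring score
from prior assessment scores. Synthetic numbers; not NWEA or district data."""
import numpy as np

# Hypothetical records: (fall 2019, winter 2020, spring 2020) scale scores.
# Students with a spring score serve as the "training" set for the projection.
complete = np.array([
    [205.0, 210.0, 216.0],
    [198.0, 203.0, 207.0],
    [220.0, 224.0, 229.0],
    [190.0, 196.0, 201.0],
    [212.0, 215.0, 222.0],
])

# Design matrix: intercept plus fall and winter scores.
X = np.column_stack([np.ones(len(complete)), complete[:, 0], complete[:, 1]])
y = complete[:, 2]

# Ordinary least squares via numpy's least-squares solver.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Project a spring score for a student who missed the spring assessment.
fall, winter = 208.0, 213.0
projected_spring = coef @ np.array([1.0, fall, winter])
print(f"Projected spring 2020 score: {projected_spring:.1f}")
```

Even this simple version makes the core limitation visible: a projection inherits the historical relationship among scores and cannot, by construction, reflect pandemic-era disruptions to learning.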
School Systems Have Concerns About Whether Data From Remotely Administered Diagnostic or Interim Assessments This Fall Can Be Used for Instructional Decisionmaking or to Understand the Impacts of COVID-19 on Student Learning
Several respondents to our survey noted concerns about the trustworthiness of remote assessment data collected this fall and the extent to which results could be interpreted as valid indicators of student achievement or understanding. Particularly for districts that started the 2020-21 school year remotely, respondents were concerned about student engagement and motivation, as well as the possibility of students rushing through assessments, running into technological or internet barriers, or seeking assistance from guardians or other resources.
In a typical school year, some students do not take end-of-year assessments, but these missing scores are usually treated as missing at random. Given the much greater prevalence of missing scores this year, however, several respondents raised questions about the extent to which available assessment scores are representative of school or district performance as a whole. Because vulnerable students (e.g., students with disabilities, students experiencing homelessness) may be the least likely to have access to remote instruction and assessments, it is likely that the students who are not assessed this year differ systematically from those who are. Still other respondents noted that they encountered resistance from parents around fall assessment and that they prioritized student well-being (e.g., safety, sense of community, and social and emotional well-being) over academics, a perspective that resonates with recent findings from a nationally representative sample of teachers and school leaders drawn from RAND's American Educator Panel (AEP).
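To illustrate why this representativeness concern matters, here is a small synthetic simulation, entirely our own construction rather than district or NWEA data, in which lower-scoring students are assumed to be less likely to be assessed; the average of the observed scores then overstates performance for the full population.

```python
"""A toy illustration of non-random missingness: when lower-scoring students
are less likely to be assessed, the observed mean overstates the true mean."""
import random

random.seed(42)

# Hypothetical population of true scores.
population = [random.gauss(210, 15) for _ in range(10_000)]

def assessed(score: float) -> bool:
    # Assumed mechanism: probability of being assessed rises with score,
    # standing in for access barriers that fall hardest on some students.
    p = min(1.0, max(0.0, 0.3 + (score - 180) / 100))
    return random.random() < p

observed = [s for s in population if assessed(s)]

true_mean = sum(population) / len(population)
observed_mean = sum(observed) / len(observed)
print(f"True mean:     {true_mean:.1f}")
print(f"Observed mean: {observed_mean:.1f} "
      f"({len(observed)} of {len(population)} students assessed)")
```

The direction and size of the gap depend entirely on the assumed missingness mechanism, which is precisely what districts cannot observe directly.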
The next phase of our research focuses on conducting a series of simulation studies and empirical applications of four of the most commonly cited strategies that our district respondents indicated they were using to make course placement decisions and to evaluate programs or district-wide initiatives. While we focus these empirical analyses on specific use cases, we believe that our framework provides guidance for local investigations of the intended (and unintended) consequences for school and school system decisionmaking related to COVID-19 disruptions.
As we write this commentary, the pandemic continues to wreak havoc on communities and school systems around the country, making clear that schools may have to adapt practices for the foreseeable future. Even if assessment data is available in spring 2021, there are serious questions about how comparable those scores will be to prior-year scores, and the extent to which they reflect inequities in opportunity to learn. Several school districts reported higher-than-typical course failure rates in middle school and high school. This is consistent with the AEP finding that fewer than one in four teachers reported that all or nearly all of their students were completing assignments (a figure that fell to nine percent among teachers in schools serving high percentages of low-income students or students of color).
We invite individuals to reach out to RAND with additional recommendations or considerations. We are also interested in hearing how districts across the country are approaching course placement, accountability, and program evaluation. Connect with the research team via email at jschweig@rand.org.
Jonathan Schweig is a social scientist and Andrew McEachin is a senior policy researcher at the nonprofit, nonpartisan RAND Corporation; both are professors at the Pardee RAND Graduate School. Megan Kuhfeld is a researcher at NWEA.
A version of this commentary appeared on ies.ed.gov on December 16, 2020.
Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.