Automated Scoring of Students' Use of Text Evidence in Writing

Published in: Reading Research Quarterly (2019). doi: 10.1002/rrq.281

Posted on February 26, 2020

by Richard Correnti, Lindsay Clare Matsumura, Elaine Lin Wang, Diane Litman, Zahra Rahimi, Zahid Kisa


Access further information on this document at International Literacy Association

This article was published outside of RAND. The full text of the article can be found at the link above.

Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight on best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that are feasible to implement at scale. Automated essay-scoring (AES) technologies offer one potential approach to increasing the feasibility of research in this area, provided that the scores yield information about substantive dimensions of writing aligned to new standards and are sensitive to variation in literacy instruction. The authors describe an approach to using AES technologies to provide information about students' skills at marshaling text evidence in the upper elementary grades. Specifically, the authors examined 1,529 responses to a response-to-text assessment (RTA) from 65 fifth- and sixth-grade language arts classrooms, from which the authors also collected data on instruction via logs, text-based writing assignments, and surveys. Through correlational, univariate, and multilevel multivariate analyses, the authors found validity evidence supporting automated scoring of the RTA: close correspondence between human and AES scores, alignment of AES scores with components of instruction that the authors expected would predict variation in students' writing quality, and association between AES scores and other expected measures of student achievement. These findings provide encouraging evidence that AES technologies, as applied to the RTA, can generate valid inferences about students' ability to marshal text evidence in writing and, thus, could be a useful tool for advancing large-scale writing research.


This report is part of the RAND Corporation External publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

Our mission to help improve policy and decisionmaking through research and analysis is enabled through our core values of quality and objectivity and our unwavering commitment to the highest level of integrity and ethical behavior. To help ensure our research and analysis are rigorous, objective, and nonpartisan, we subject our research publications to a robust and exacting quality-assurance process; avoid both the appearance and reality of financial and other conflicts of interest through staff training, project screening, and a policy of mandatory disclosure; and pursue transparency in our research engagements through our commitment to the open publication of our research findings and recommendations, disclosure of the source of funding of published research, and policies to ensure intellectual independence.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.