Assessing Students' Use of Evidence and Organization in Response-to-Text Writing
Using Natural Language Processing for Rubric-Based Automated Scoring
Published in: International Journal of Artificial Intelligence in Education, Volume 27, Issue 4 (December 2017), pages 694-728. doi: 10.1007/s40593-017-0143-2
Posted on RAND.org on December 05, 2017
Access further information on this document at International Journal of Artificial Intelligence in Education
This article was published outside of RAND. The full text of the article can be found at the link above.
This paper presents an investigation of score prediction based on natural language processing for two targeted constructs within analytic text-based writing: (1) students' effective use of evidence and (2) their organization of ideas and evidence in support of their claim. With the long-term goal of producing feedback for students and teachers, we designed a task-dependent model for each dimension that aligns with the scoring rubric and makes use of the source material; we believe such a model will be meaningful and easy to interpret given the writing task. We used two datasets of essays written by students in grades 5-6 and 6-8. Our experimental results show that our task-dependent model (consistent with the rubric) performs as well as, and in some cases better than, competitive baselines. We also show the potential generalizability of the rubric-based model by performing cross-corpus experiments. Finally, we show that the predictive utility of different feature groups in our rubric-based modeling approach is related to how much each feature group covers the rubric's criteria.
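The full feature set is described in the article itself, but the core idea of grounding an "evidence use" score in the source material can be illustrated with a minimal sketch. The function names, the tiny stopword list, and the word-overlap heuristic below are illustrative assumptions for this page, not the authors' actual features:

```python
# Hypothetical sketch: one crude "evidence use" feature -- the fraction of an
# essay's content words that also appear in the source text the students read.
# The rubric-based models in the article use richer, rubric-aligned features;
# this only shows the source-dependent flavor of such a feature.
import re

# Deliberately small, illustrative stopword list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that"}

def content_words(text):
    """Lowercase alphabetic tokens with the small stopword list removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def source_overlap(essay, source):
    """Fraction of the essay's content words that occur in the source material."""
    essay_words = content_words(essay)
    if not essay_words:
        return 0.0
    return len(essay_words & content_words(source)) / len(essay_words)

source = "The author argues that winters are growing milder across the region."
essay = "The writer says winters are milder, citing the region's records."
print(round(source_overlap(essay, source), 2))
```

A feature like this would be one input among many to a score predictor; because it is tied directly to the source text, its value is easy to explain to a student or teacher, which is the interpretability property the abstract emphasizes.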