Predicting the Quality of Revisions in Argumentative Writing

Zhexiong Liu, Diane Litman, Elaine Lin Wang, Lindsay Clare Matsumura, Richard Correnti

Posted on rand.org Nov 30, 2023. Published in: Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications, pages 275–287 (July 2023). doi: 10.18653/v1/2023.bea-1.24

The ability to revise in response to feedback is critical to students' writing success. In argument writing in particular, identifying whether an argument revision (AR) is successful is a complex problem because AR quality depends on the overall content of the argument. For example, adding the same evidence sentence could strengthen or weaken existing claims in different argument contexts (ACs). To address this issue, we developed Chain-of-Thought prompts to facilitate ChatGPT-generated ACs for AR quality prediction. Experiments on two corpora, our annotated elementary essays and an existing benchmark of college essays, demonstrate the superiority of the proposed ACs over baselines.
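The two-step idea in the abstract, first eliciting an argument context (AC), then conditioning the AR quality prediction on it, can be sketched as prompt construction. This is a hypothetical illustration: the function names and prompt wording below are our assumptions, not the authors' actual prompts.

```python
# Hypothetical sketch of the two-step Chain-of-Thought setup described in
# the abstract: (1) prompt a model to generate an argument context (AC)
# from the essay, (2) prompt it to judge an argument revision (AR) given
# that AC. Prompt wording is illustrative only.

def build_ac_prompt(essay: str) -> str:
    """Step 1: elicit a step-by-step summary of claims and evidence (the AC)."""
    return (
        "Read the essay below and summarize its claims and the evidence "
        "supporting them, step by step. This summary is the argument "
        "context.\n\n"
        f"Essay:\n{essay}\n"
    )

def build_ar_prompt(argument_context: str, revision: str) -> str:
    """Step 2: condition the AR quality judgment on the generated AC."""
    return (
        "Given the argument context below, reason step by step about "
        "whether the revision strengthens or weakens the argument, then "
        "answer 'successful' or 'unsuccessful'.\n\n"
        f"Argument context:\n{argument_context}\n\n"
        f"Revision:\n{revision}\n"
    )

# Example usage with placeholder text; in practice each prompt would be
# sent to a chat model such as ChatGPT.
ac_prompt = build_ac_prompt("School uniforms improve focus because ...")
ar_prompt = build_ar_prompt("Claim: uniforms improve focus. Evidence: ...",
                            "Added a supporting statistic about attendance.")
```

Keeping the AC generation separate from the AR judgment mirrors the paper's point that the same revision can help or hurt depending on the surrounding argument: the second prompt always sees an explicit representation of that context.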

Document Details

  • Availability: Non-RAND
  • Year: 2023
  • Pages: 13
  • Document Number: EP-70320

This publication is part of the RAND external publication series. Many RAND studies are published in peer-reviewed scholarly journals, as chapters in commercial books, or as documents published by other organizations.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.