🤖 AI Summary
Middle school students struggle to employ evidence effectively and to construct logical arguments in argumentative writing, while teachers face challenges in providing the timely, actionable feedback that targeted revision requires. Method: This study develops the first automated formative assessment system focused on *revision behaviors* rather than final output quality, integrating natural language processing, sequence alignment, and random forest classification to model draft-to-revision differences, identify revision intents, and quantify responsiveness to feedback. Contribution/Results: The system introduces three novel fine-grained assessments: (1) evidence usage quality, (2) logical-chain revision types, and (3) feedback adoption efficacy. Empirically validated across three K–12 schools with six teachers and 406 students, the system accurately detects evidence deficiencies and revision actions, and its use significantly improved students' argumentative writing proficiency (p < 0.01). The approach delivers interpretable, intervention-ready support for process-oriented writing instruction.
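The method sentence above names three concrete components: NLP features, sequence alignment across drafts, and a random forest classifier. The snippet below is a minimal sketch of that pipeline, not the authors' implementation: it assumes TF-IDF cosine similarity for the alignment step, a tiny hand-picked surface-feature set, and made-up intent labels and training pairs; the paper's actual features, thresholds, and label scheme are not reproduced here.

```python
# Illustrative sketch of the align -> featurize -> classify pipeline.
# The alignment metric, features, labels, and training data are all
# assumptions for demonstration, not the eRevise+RF implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def align_sentences(draft, revision, threshold=0.4):
    """Greedily pair each revision sentence with its most similar unused
    draft sentence; unpaired sentences become additions or deletions."""
    vec = TfidfVectorizer().fit(draft + revision)
    sim = cosine_similarity(vec.transform(draft), vec.transform(revision))
    used, edits = set(), []
    for j, new in enumerate(revision):
        candidates = [i for i in np.argsort(-sim[:, j]) if i not in used]
        if candidates and sim[candidates[0], j] >= threshold:
            i = candidates[0]
            used.add(i)
            op = "kept" if sim[i, j] > 0.99 else "modified"
            edits.append((draft[i], new, op))
        else:
            edits.append(("", new, "added"))
    edits += [(draft[i], "", "deleted")
              for i in range(len(draft)) if i not in used]
    return edits


def edit_features(old, new):
    """Toy surface features for one aligned edit (hypothetical feature set)."""
    return [
        len(new.split()) - len(old.split()),   # word-count delta
        int("because" in new.lower()),         # reasoning connective
        int("for example" in new.lower()),     # evidence marker
        int('"' in new),                       # quoted source material
    ]


# Tiny made-up training set: (old, new) edit pairs labeled by revision intent.
labeled = [
    ("", 'For example, the article says "attendance rose 20 percent."', "evidence"),
    ("Sports matter.", "Sports matter because they build teamwork.", "reasoning"),
    ("", 'The author notes "test scores improved" after the program.', "evidence"),
    ("It helps.", "It helps because students stay engaged.", "reasoning"),
]
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([edit_features(o, n) for o, n, _ in labeled],
        [intent for *_, intent in labeled])

draft = ["School sports should be funded.", "They are fun."]
revision = [
    "School sports should be funded.",
    "They are fun because they build teamwork.",
    'For example, the article says "attendance rose 20 percent."',
]
for old, new, op in align_sentences(draft, revision):
    if op in ("added", "modified"):
        print(op, "->", clf.predict([edit_features(old, new)])[0])
```

A real system would presumably swap in a stronger semantic-similarity model for TF-IDF and a validated feature set, but the control flow the summary describes is the same: align sentences across drafts, featurize each edit, then classify its revision intent.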
📝 Abstract
The ability to revise essays in response to feedback is important for students' writing success, so an automated writing evaluation (AWE) system that supports students in revising their essays is essential. We present eRevise+RF, an enhanced AWE system for assessing student essay revisions (i.e., changes made to an essay in response to feedback to improve its quality) and for providing revision feedback. We deployed the system with 6 teachers and 406 students across 3 schools in Pennsylvania and Louisiana. The results confirmed its effectiveness in (1) assessing student essays in terms of evidence usage, (2) extracting evidence and reasoning revisions across essay drafts, and (3) determining revision success in responding to feedback. The evaluation also suggested that eRevise+RF helps young students improve their argumentative writing skills through revision and formative feedback.