🤖 AI Summary
This study investigates the capability of large language models (LLMs) to perform multidimensional analytic assessment of second-language (L2) graduate students' academic English writing, i.e., to simultaneously generate reliable scores and explanatory feedback across nine predefined criteria.

Method: Several popular LLMs are prompted, under varying conditions, to score and comment on L2 graduate students' literature reviews against nine expert-annotated analytic criteria. The approach integrates multi-prompt strategies with state-of-the-art LLMs, a custom-built L2 academic writing corpus, expert-derived multidimensional annotation guidelines, and rule-based enhancement mechanisms. Feedback comment quality is then judged with a novel evaluation framework that is interpretable, cost-efficient, scalable, and reproducible, in contrast to existing methods that rely on labor-intensive manual judgments.

Contribution/Results: Experimental results indicate that LLM-generated multidimensional scores and feedback are reasonably good, generally reliable, and interpretable, offering a systematic examination of LLMs' reliability and validity in multidimensional writing assessment. The annotated corpus is released for reproducibility.
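To make the scoring step concrete, the core of such a pipeline might look like the following sketch. The criterion names, JSON schema, and `call_llm` helper are hypothetical stand-ins, since the paper's exact prompts and rubric labels are not reproduced here.

```python
import json

# Hypothetical criterion names; the paper defines nine expert-derived
# dimensions, whose exact labels are not reproduced in this summary.
CRITERIA = [
    "argumentation", "organization", "coherence", "source_integration",
    "academic_style", "grammar", "vocabulary", "citation_conventions",
    "task_fulfillment",
]

def build_prompt(essay: str) -> str:
    """Assemble one prompt asking for a score and a comment per criterion."""
    rubric = "\n".join(f"- {c}" for c in CRITERIA)
    return (
        "You are an expert rater of L2 academic English writing.\n"
        "For each criterion below, give a 1-5 score and a one-sentence "
        "explanatory comment on the literature review.\n"
        f"Criteria:\n{rubric}\n"
        'Respond as JSON: {"<criterion>": {"score": int, "comment": str}}\n\n'
        f"Text:\n{essay}"
    )

def assess(essay: str, call_llm) -> dict:
    """call_llm is any text-in/text-out LLM client (a stand-in, not a real API)."""
    raw = call_llm(build_prompt(essay))
    return json.loads(raw)  # e.g. {"grammar": {"score": 4, "comment": "..."}}
```

A single prompt per essay, as sketched here, is only one of several plausible prompting conditions; per-criterion prompts are another.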
📝 Abstract
The paper explores the performance of LLMs in the context of multi-dimensional analytic writing assessments, i.e., their ability to provide both scores and comments based on multiple assessment criteria. Using a corpus of literature reviews written by L2 graduate students and assessed by human experts against 9 analytic criteria, we prompt several popular LLMs to perform the same task under various conditions. To evaluate the quality of feedback comments, we apply a novel feedback comment quality evaluation framework. Compared to existing methods that rely on manual judgments, this framework is interpretable, cost-efficient, scalable, and reproducible. We find that LLMs can generate reasonably good and generally reliable multi-dimensional analytic assessments. We release our corpus for reproducibility.
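The claim that assessments are "generally reliable" implies agreement statistics between LLM and human scores. A common check for ordinal rubric scores is quadratic weighted kappa, computed per criterion; the sketch below illustrates that standard technique with toy data and is not necessarily the paper's exact evaluation procedure.

```python
from sklearn.metrics import cohen_kappa_score

def per_criterion_agreement(human: dict, llm: dict) -> dict:
    """human/llm map each criterion to a list of integer scores,
    aligned by essay; returns quadratic weighted kappa per criterion."""
    return {
        c: cohen_kappa_score(human[c], llm[c], weights="quadratic")
        for c in human
    }

# Toy scores for two of the nine criteria (hypothetical data):
human = {"organization": [3, 4, 2, 5], "grammar": [4, 4, 3, 5]}
llm   = {"organization": [3, 4, 3, 5], "grammar": [4, 3, 3, 5]}
print(per_criterion_agreement(human, llm))
```

Quadratic weighting penalizes large score disagreements more than adjacent ones, which suits 1-5 analytic scales better than exact-match accuracy.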