Beyond Holistic Scores: Automatic Trait-Based Quality Scoring of Argumentative Essays

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel approach to automated essay scoring that addresses the limited interpretability of existing systems, which typically provide only holistic scores without the dimension-specific feedback essential for pedagogical support. The method integrates a small, open-source large language model (LLM) enhanced with structured in-context learning and a CORAL ordinal regression model built upon a BigBird encoder. It is the first to jointly incorporate semantic information from educational rubrics and explicit ordinal modeling in argumentative essay assessment. By explicitly capturing the ordinal nature of scores, the approach significantly improves agreement with human raters across five quality dimensions, outperforming conventional classification and regression baselines as well as larger LLMs. Notably, the small LLM, without fine-tuning, demonstrates strong performance on reasoning-related dimensions, highlighting its viability for locally deployable, transparent, and explainable automated scoring.
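The structured in-context learning setup, with rubric text, worked examples, and feedback/confidence requests, can be sketched roughly as below. This is an illustrative reconstruction: the paper's exact prompt template is not reproduced here, and the function name, field wording, and example values are assumptions.

```python
def build_prompt(trait, rubric_text, examples, essay):
    """Assemble a rubric-aligned in-context prompt for one scoring trait.

    `examples` is a list of (essay_text, score) pairs used as in-context
    demonstrations. The template below is a hypothetical sketch of the
    described approach, not the paper's actual prompt.
    """
    parts = [
        f"You are grading the '{trait}' trait of an argumentative essay.",
        f"Rubric: {rubric_text}",
        "Return a score, one sentence of feedback, and your confidence (0-1).",
    ]
    # Append each rubric-aligned demonstration as an essay/score pair.
    for ex_essay, ex_score in examples:
        parts.append(f"Essay: {ex_essay}\nScore: {ex_score}")
    # The target essay comes last, with the score left for the model to fill.
    parts.append(f"Essay: {essay}\nScore:")
    return "\n\n".join(parts)
```

A call such as `build_prompt("Organization", rubric, demos, essay)` then yields a single prompt string that a small local LLM can consume without any fine-tuning.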

📝 Abstract
Automated Essay Scoring systems have traditionally focused on holistic scores, limiting their pedagogical usefulness, especially for complex essay genres such as argumentative writing. In educational contexts, teachers and learners require interpretable, trait-level feedback that aligns with instructional goals and established rubrics. In this paper, we study trait-based Automatic Argumentative Essay Scoring using two complementary modeling paradigms designed for realistic educational deployment: (1) structured in-context learning with small open-source LLMs, and (2) a supervised, encoder-based BigBird model with a CORAL-style ordinal regression formulation, optimized for long-sequence understanding. We conduct a systematic evaluation on the ASAP++ dataset, which includes essay scores across five quality traits, offering strong coverage of core argumentation dimensions. The LLMs are prompted with carefully designed, rubric-aligned in-context examples, along with feedback and confidence requests, while the BigBird model explicitly captures score ordinality via the rank-consistent CORAL framework. Our results show that explicitly modeling score ordinality substantially improves agreement with human raters across all traits, outperforming the LLMs as well as nominal classification and regression baselines. This finding reinforces the importance of aligning model objectives with rubric semantics for educational assessment. At the same time, small open-source LLMs achieve competitive performance without task-specific fine-tuning, particularly on reasoning-oriented traits, while enabling transparent, privacy-preserving, and locally deployable assessment scenarios. Our findings provide methodological, modeling, and practical insights for the design of AI-based educational systems that aim to deliver interpretable, rubric-aligned feedback for argumentative writing.
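The rank-consistent CORAL formulation referenced in the abstract can be sketched at inference time as follows. A single shared scorer is combined with K-1 rank-specific biases, and each binary task estimates P(score > k); because all tasks share one weight vector, the predictions are rank-monotonic by construction. The tiny one-dimensional example is an illustrative assumption, not the paper's BigBird-based implementation.

```python
import numpy as np

def coral_predict(features, weights, biases):
    """CORAL-style ordinal prediction (minimal sketch).

    A shared linear scorer f(x) = w.x is offset by K-1 ordered biases b_k.
    Each binary task models P(y > k) = sigmoid(f(x) + b_k); the predicted
    ordinal score is the number of thresholds exceeded (0 .. K-1).
    """
    logit = features @ weights                                 # shared scorer, shape (N,)
    probs = 1.0 / (1.0 + np.exp(-(logit[:, None] + biases)))   # shape (N, K-1)
    return (probs > 0.5).sum(axis=1)                           # ordinal rank per essay
```

For instance, with `weights = np.array([1.0])` and decreasing biases `np.array([2.0, 0.0, -2.0])` (a four-level score scale), essays with feature values -5, 0, and 5 map to ranks 0, 1, and 3 respectively, illustrating the monotone, rank-consistent behavior that a nominal classifier does not guarantee.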
Problem

Research questions and friction points this paper is trying to address.

Automated Essay Scoring
trait-based scoring
argumentative writing
educational assessment
rubric-aligned feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

trait-based scoring
ordinal regression
in-context learning
BigBird
interpretable feedback