Confusion-Aware Rubric Optimization for LLM-based Automated Grading

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation and logical inconsistencies in existing automated scoring systems caused by conflating heterogeneous error signals during rule optimization. To resolve this, we propose a structured optimization approach based on confusion matrices that decomposes aggregate errors into specific misclassification patterns. This enables targeted generation of repair patches, complemented by a diversity-aware patch selection mechanism integrated with large language models to ensure precise and conflict-free rule updates. By moving beyond conventional aggregated error handling, our method achieves state-of-the-art performance on both teacher education and STEM datasets, demonstrating that pattern-specific repairs significantly enhance scoring accuracy, logical coherence, and computational efficiency.

📝 Abstract
Accurate and unambiguous guidelines are critical for large language model (LLM) based graders, yet manually crafting these prompts is often sub-optimal as LLMs can misinterpret expert guidelines or lack necessary domain specificity. Consequently, the field has moved toward automated prompt optimization to refine grading guidelines without the burden of manual trial and error. However, existing frameworks typically aggregate independent and unstructured error samples into a single update step, resulting in "rule dilution" where conflicting constraints weaken the model's grading logic. To address these limitations, we introduce Confusion-Aware Rubric Optimization (CARO), a novel framework that enhances accuracy and computational efficiency by structurally separating error signals. CARO leverages the confusion matrix to decompose monolithic error signals into distinct modes, allowing for the diagnosis and repair of specific misclassification patterns individually. By synthesizing targeted "fixing patches" for dominant error modes and employing a diversity-aware selection mechanism, the framework prevents guidance conflict and eliminates the need for resource-heavy nested refinement loops. Empirical evaluations on teacher education and STEM datasets demonstrate that CARO significantly outperforms existing SOTA methods. These results suggest that replacing mixed-error aggregation with surgical, mode-specific repair yields robust improvements in automated assessment scalability and precision.
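The core idea in the abstract, decomposing aggregate grading errors via the confusion matrix and repairing dominant misclassification modes individually, can be illustrated with a minimal sketch. The function name and the 0–2 rubric scale below are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def dominant_error_modes(y_true, y_pred, top_k=3):
    """Count off-diagonal (true, predicted) pairs from grader output
    and return the most frequent misclassification patterns, i.e. the
    dominant error modes a targeted fixing patch would address."""
    confusions = Counter(
        (t, p) for t, p in zip(y_true, y_pred) if t != p
    )
    return confusions.most_common(top_k)

# Illustrative grades on a hypothetical 0-2 rubric scale
true_scores = [2, 1, 0, 2, 1, 1, 0, 2]
pred_scores = [1, 1, 0, 1, 2, 1, 1, 2]
print(dominant_error_modes(true_scores, pred_scores))
# → [((2, 1), 2), ((1, 2), 1), ((0, 1), 1)]
```

Here the dominant mode is "true 2 graded as 1", so a repair patch would target only the rubric language for distinguishing full-credit from partial-credit answers, rather than updating the prompt against all errors at once.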
Problem

Research questions and friction points this paper is trying to address.

automated grading
prompt optimization
rule dilution
confusion matrix
LLM-based assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confusion-Aware Rubric Optimization
LLM-based automated grading
error mode decomposition
targeted fixing patches
diversity-aware selection
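The diversity-aware selection idea listed above can be sketched as a greedy coverage heuristic: prefer patches that address error modes not yet covered by already-chosen patches, so that selected rubric updates do not conflict or overlap. This is an illustrative stand-in, not the paper's actual selection mechanism; the patch identifiers and `modes` field are assumed names:

```python
def select_diverse_patches(patches, budget=2):
    """Greedily pick up to `budget` patches, each time taking the patch
    that covers the most error modes not yet covered. Stops early when
    no remaining patch adds new coverage."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(
            (p for p in patches if p not in chosen),
            key=lambda p: len(set(p["modes"]) - covered),
            default=None,
        )
        if best is None or not (set(best["modes"]) - covered):
            break
        chosen.append(best)
        covered |= set(best["modes"])
    return [p["id"] for p in chosen]

# Hypothetical candidate patches, each tagged with the
# (true, predicted) confusion-matrix cells it repairs
patches = [
    {"id": "patch_a", "modes": [(2, 1), (1, 2)]},
    {"id": "patch_b", "modes": [(2, 1)]},
    {"id": "patch_c", "modes": [(0, 1)]},
]
print(select_diverse_patches(patches))  # → ['patch_a', 'patch_c']
```

Note that `patch_b` is skipped even though it targets the dominant mode, because `patch_a` already covers that cell; this is the sense in which diversity-aware selection avoids redundant or conflicting rule updates.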