Optimizing In-Context Demonstrations for LLM-based Automated Grading

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models often struggle with reliable automated scoring due to difficulties in accurately capturing the boundaries of scoring rubrics through example selection and rationale generation. This work proposes GUIDE, a novel framework that formulates example optimization as an iterative process focused explicitly on scoring boundaries. GUIDE employs contrastive operators to identify “boundary pairs”—semantically similar responses assigned different scores—and automatically generates discriminative scoring rationales that exclude adjacent score levels, thereby enhancing the model’s understanding and adherence to fine-grained rubric criteria. Experimental results demonstrate that GUIDE significantly outperforms conventional retrieval-based baselines across physics, chemistry, and instructional content datasets, exhibiting particular robustness on boundary cases and producing scores more aligned with human educational standards.

📝 Abstract
Automated assessment of open-ended student responses is a critical capability for scaling personalized feedback in education. While large language models (LLMs) have shown promise in grading tasks via in-context learning (ICL), their reliability is heavily dependent on the selection of few-shot exemplars and the construction of high-quality rationales. Standard retrieval methods typically select examples based on semantic similarity, which often fails to capture the subtle decision boundaries required for rubric adherence. Furthermore, manually crafting the expert rationales needed to guide these models can be a significant bottleneck. To address these limitations, we introduce GUIDE (Grading Using Iteratively Designed Exemplars), a framework that reframes exemplar selection and refinement in automated grading as a boundary-focused optimization problem. GUIDE operates on a continuous loop of selection and refinement, employing novel contrastive operators to identify "boundary pairs" that are semantically similar but possess different grades. We enhance exemplars by generating discriminative rationales that explicitly articulate why a response receives a specific score to the exclusion of adjacent grades. Extensive experiments across datasets in physics, chemistry, and pedagogical content knowledge demonstrate that GUIDE significantly outperforms standard retrieval baselines. By focusing the model's attention on the precise edges of the rubric, our approach shows exceptionally robust gains on borderline cases and improved rubric adherence. GUIDE paves the way for trusted, scalable assessment systems that align closely with human pedagogical standards.
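The "boundary pair" idea described in the abstract can be sketched as a simple contrastive selection step: among candidate responses represented as embedding vectors, find the pairs that are most semantically similar yet received different rubric scores. The function name, inputs, and scoring convention below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of boundary-pair selection (not the paper's code).
# Assumes each response is an embedding vector paired with an integer score.
import numpy as np

def find_boundary_pairs(embeddings, scores, top_k=5):
    """Return the top_k index pairs of responses that are most similar
    (cosine similarity) yet were assigned different rubric scores."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sim = X @ X.T  # pairwise cosine similarities
    pairs = []
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if scores[i] != scores[j]:  # contrastive: similar text, different grade
                pairs.append((sim[i, j], i, j))
    pairs.sort(reverse=True)  # most similar mismatched pairs first
    return [(i, j) for _, i, j in pairs[:top_k]]
```

In a full GUIDE-style loop, each selected pair would then be passed to an LLM to generate a discriminative rationale explaining why one response earns its score and the adjacent score is excluded.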
Problem

Research questions and friction points this paper is trying to address.

automated grading
in-context learning
exemplar selection
rubric adherence
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
automated grading
boundary pairs
discriminative rationales
contrastive operators