Rubric Is All You Need: Enhancing LLM-based Code Evaluation With Question-Specific Rubrics

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses weak logical reasoning and insufficient semantic alignment (despite strong generalizability) in LLM-based code assessment for educational settings. We propose a problem-customized multi-agent evaluation framework. Our key contributions are: (1) the first problem-aware method for automatic generation of fine-grained, context-specific scoring rubrics; (2) a novel metric, "Leniency", quantifying evaluator permissiveness to enhance interpretability and controllability of strictness; and (3) the first real-world, education-oriented benchmark comprising 150 data structures & algorithms (DSA) and 80 object-oriented programming (OOP) student submissions. Experiments show our approach improves Spearman correlation with human judgments on logical correctness by 27.4% over baselines and reduces deviation from expert strictness ratings by 41%. The framework significantly enhances alignment with pedagogical objectives and feedback accuracy.

📝 Abstract
Since the disruption in LLM technology brought about by the release of GPT-3 and ChatGPT, LLMs have shown remarkable promise in programming-related tasks. While code generation remains a popular field of research, code evaluation using LLMs remains a problem with no conclusive solution. In this paper, we focus on LLM-based code evaluation and attempt to fill in the existing gaps. We propose novel multi-agent approaches using question-specific rubrics tailored to the problem statement, arguing that these perform better for logical assessment than the existing approaches that use question-agnostic rubrics. To address the lack of suitable evaluation datasets, we introduce two datasets: a Data Structures and Algorithms dataset containing 150 student submissions from a popular Data Structures and Algorithms practice website, and an Object Oriented Programming dataset comprising 80 student submissions from undergraduate computer science courses. In addition to standard metrics (Spearman Correlation, Cohen's Kappa), we propose a new metric called Leniency, which quantifies evaluation strictness relative to expert assessment. Our comprehensive analysis demonstrates that question-specific rubrics significantly enhance logical assessment of code in educational settings, providing better feedback aligned with instructional goals beyond mere syntactic correctness.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM-based code evaluation with question-specific rubrics
Addressing lack of suitable datasets for code evaluation
Improving logical assessment in educational settings beyond syntax
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses question-specific rubrics for logical assessment
Introduces two new code evaluation datasets
Proposes Leniency metric for evaluation strictness
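The page does not give the exact formula for the Leniency metric, only that it "quantifies evaluation strictness relative to expert assessment." A minimal sketch, assuming Leniency is the signed mean gap between LLM-assigned and expert-assigned scores (the function name and definition are illustrative, not taken from the paper):

```python
def leniency(llm_scores, expert_scores):
    """Signed mean gap between LLM and expert scores (assumed definition).

    Positive values suggest the LLM grades more leniently than the expert,
    negative values suggest it grades more strictly, and 0 means the two
    agree on average.
    """
    assert len(llm_scores) == len(expert_scores) and llm_scores
    diffs = [l - e for l, e in zip(llm_scores, expert_scores)]
    return sum(diffs) / len(diffs)

# Example: the LLM over-scores two of four submissions by 1 point each.
llm = [8, 6, 9, 5]
expert = [7, 6, 8, 5]
print(leniency(llm, expert))  # 0.5 -> slightly lenient evaluator
```

A signed (rather than absolute) average preserves the direction of the bias, which matches the paper's framing of Leniency as a controllable strictness knob rather than just an error magnitude.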
Authors

Aditya Pathak
BITS Pilani, India

Rachit Gandhi
BITS Pilani, India

Vaibhav Uttam
BITS Pilani, India

Devansh
BITS Pilani, India

Yashwanth Nakka
Georgia Institute of Technology
Space Robotics, Autonomy, Nonlinear Control, Motion Planning Under Uncertainty

Aaryan Raj Jindal
BITS Pilani, India

Pratyush Ghosh
BITS Pilani, India

Arnav Ramamoorthy
BITS Pilani, India

Shreyash Verma
BITS Pilani, India

Aditya Mittal
Professor of Biological Sciences, Indian Institute of Technology Delhi
Biology, Biophysics, Biochemistry

Aashna Ased
BITS Pilani, India

Chirag Khatri
BITS Pilani, India

Jagat Sesh Challa
Assistant Professor, Department of Computer Science & Information Systems, BITS Pilani
Big Data Analytics, Computer Vision, Federated Learning, Materials Informatics, HCI

Dhruv Kumar
BITS Pilani, India