🤖 AI Summary
This work investigates systematic discrepancies between large language models (LLMs) and human developers when LLMs serve as code evaluators, particularly in realistic interaction scenarios involving ambiguous intent and partial context. To address this, the authors propose TRACE (Tool for Rubric Analysis in Code Evaluation), a framework that measures how well LLM judges predict human preferences and automatically extracts rubric items to quantify alignment gaps between LLM and human judgments across three interaction modalities: chat-based programming, IDE code completion, and instruction-driven code editing. By comparing human annotations with model judgments, the study identifies 35 significant sources of misalignment across these modalities, most of which map to established software engineering code quality criteria. Experimental results show that even the best-performing LLM judges underperform human annotators by 12–23% and exhibit systematic biases on individual quality dimensions, such as favoring verbose code explanations where humans prefer concise ones.
📝 Abstract
As LLMs are increasingly used as judges in code applications, they should be evaluated in realistic interactive settings that capture partial context and ambiguous intent. We present TRACE (Tool for Rubric Analysis in Code Evaluation), a framework that evaluates LLM judges' ability to predict human preferences and automatically extracts rubric items to reveal systematic biases in how humans and models weigh each item. Across three modalities -- chat-based programming, IDE autocompletion, and instructed code editing -- we use TRACE to measure how well LLM judges align with developer preferences. Among 13 models, the best judges underperform human annotators by 12-23%. TRACE identifies 35 significant sources of misalignment between humans and judges across interaction modalities, the majority of which correspond to existing software engineering code quality criteria. For example, in chat-based coding, judges are biased toward longer code explanations while humans prefer shorter ones. We find significant misalignment on the majority of existing code quality dimensions, revealing gaps between LLM judges and human preferences in realistic coding applications.
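To make the core measurement concrete, here is a minimal sketch of the kind of per-dimension alignment analysis the abstract describes: comparing an LLM judge's pairwise preferences against human preferences, overall and broken down by rubric dimension. The data, field names, and attribution of each disagreement to a single rubric dimension are all hypothetical simplifications, not TRACE's actual method.

```python
from collections import defaultdict

# Toy pairwise-preference records: for each example, the response the human
# preferred, the response the LLM judge preferred, and the rubric dimension
# the comparison is attributed to. All values here are illustrative only.
records = [
    {"human": "A", "judge": "A", "dimension": "correctness"},
    {"human": "A", "judge": "B", "dimension": "explanation_length"},
    {"human": "B", "judge": "B", "dimension": "conciseness"},
    {"human": "A", "judge": "B", "dimension": "explanation_length"},
]

def judge_accuracy(records):
    """Fraction of examples where the judge matches the human preference."""
    hits = sum(r["human"] == r["judge"] for r in records)
    return hits / len(records)

def misalignment_by_dimension(records):
    """Disagreement rate per rubric dimension."""
    totals, misses = defaultdict(int), defaultdict(int)
    for r in records:
        d = r["dimension"]
        totals[d] += 1
        if r["human"] != r["judge"]:
            misses[d] += 1
    return {d: misses[d] / totals[d] for d in totals}

print(judge_accuracy(records))             # overall judge-human agreement
print(misalignment_by_dimension(records))  # per-dimension disagreement rates
```

On this toy data the judge agrees with the human on half the examples, and all of its disagreements concentrate in the `explanation_length` dimension, mirroring the paper's finding that judges favor longer explanations than humans do.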