🤖 AI Summary
This work addresses the susceptibility of current large language models (LLMs) to surface-quality bias in judgment tasks and provides the first systematic evaluation of emerging large reasoning models (LRMs) as evaluators. The study demonstrates that LRMs significantly outperform standard LLMs in reasoning-intensive judgment tasks, exhibiting higher accuracy, stronger instruction adherence, and greater adversarial robustness. To further enhance judgment quality, the authors propose PlanJudge, a simple yet effective strategy that prompts models to first generate an explicit evaluation plan before rendering a judgment. This approach substantially mitigates surface-quality bias in both LRMs and standard LLMs, leading to more accurate and fair assessments.
📄 Abstract
This paper presents the first systematic comparison investigating whether Large Reasoning Models (LRMs) are superior judges compared to non-reasoning LLMs. Our empirical analysis yields four key findings: 1) LRMs outperform non-reasoning LLMs in judgment accuracy, particularly on reasoning-intensive tasks; 2) LRMs demonstrate superior instruction-following capabilities in evaluation contexts; 3) LRMs exhibit enhanced robustness against adversarial attacks targeting judgment tasks; 4) however, LRMs still exhibit strong biases toward superficial quality. To improve robustness against such biases, we propose PlanJudge, an evaluation strategy that prompts the model to generate an explicit evaluation plan before executing the judgment. Despite its simplicity, our experiments demonstrate that PlanJudge significantly mitigates biases in both LRMs and standard LLMs.
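The plan-then-judge idea can be sketched as a two-stage prompting flow. The prompt wording and the `llm` client below are assumptions for illustration; the abstract only specifies that the judge first produces an explicit evaluation plan and then executes it.

```python
# Illustrative PlanJudge-style two-stage prompting (not the authors' exact prompts).

def build_plan_prompt(question: str) -> str:
    """Stage 1: ask the judge model to draft an explicit evaluation plan."""
    return (
        "You will act as an impartial judge.\n"
        f"Question: {question}\n"
        "Before judging any answers, write an explicit evaluation plan: "
        "list the criteria you will check and how you will weigh them."
    )

def build_judge_prompt(question: str, plan: str,
                       answer_a: str, answer_b: str) -> str:
    """Stage 2: execute the plan to compare two candidate answers."""
    return (
        "You will act as an impartial judge.\n"
        f"Question: {question}\n"
        f"Your evaluation plan:\n{plan}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Follow your plan step by step, then output the better answer: A or B."
    )

# Usage with a hypothetical chat-completion client `llm(prompt) -> str`:
#   plan = llm(build_plan_prompt(question))
#   verdict = llm(build_judge_prompt(question, plan, answer_a, answer_b))
```

Separating planning from execution encourages the judge to commit to content-focused criteria before it sees (or is swayed by) the surface style of the candidate answers.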