🤖 AI Summary
This work addresses the challenge of reliably detecting reasoning errors in large language models (LLMs) when processing long outputs, domain-specific content, or problems lacking verifiable reward signals. To this end, the authors propose a data-driven approach that automatically constructs a fine-grained taxonomy of reasoning errors—referred to as a "rubric"—and integrates it into reward modeling. This method extends rubric-based evaluation beyond qualitative behavioral analysis to enable quantitative correctness judgments on complex technical tasks. The framework substantially reduces reliance on expensive gold-standard labels: using as little as 20% of such labels in domains like programming, mathematics, and chemical engineering approaches the performance of full supervision. Moreover, models trained with these rubric-based rewards improve task accuracy by up to 45% over models trained with generic LLM-as-judge rewards.
📝 Abstract
An impediment to using Large Language Models (LLMs) for reasoning output verification is that LLMs struggle to reliably identify errors in thinking traces, particularly in long outputs, domains requiring expert knowledge, and problems without verifiable rewards. We propose a data-driven approach to automatically construct highly granular reasoning error taxonomies that enhance LLM-driven error detection on unseen reasoning traces. Our findings indicate that classification approaches leveraging these error taxonomies, or "rubrics", demonstrate strong error identification compared to baseline methods in technical domains like coding, math, and chemical engineering. These rubrics can be used to build stronger LLM-as-judge reward functions for reasoning model training via reinforcement learning. Experimental results show that these rewards have the potential to improve models' task accuracy on difficult domains by +45% over models trained with general LLM-as-judge rewards, and to approach the performance of models trained with verifiable rewards while using as little as 20% as many gold labels. Through our approach, we extend the usage of reward rubrics from assessing qualitative model behavior to assessing quantitative model correctness on tasks typically learned via RLVR rewards. This extension opens the door to teaching models to solve complex technical problems without a full dataset of gold labels, which are often highly costly to procure.
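The abstract describes turning a fine-grained error taxonomy into a reward function: each rubric item is checked against a reasoning trace, and the resulting pass rate serves as the reward. The sketch below is a hypothetical illustration of that idea; the rubric items, the `flags_error` helper, and the keyword heuristic (standing in for an actual LLM judge call) are all assumptions, not the paper's implementation.

```python
# Hypothetical sketch: a rubric-based reward for reasoning traces.
# Each rubric entry names an error category plus stand-in detection
# patterns; a real system would replace `flags_error` with an LLM
# judge prompted on that category.

RUBRIC = [
    ("arithmetic_slip", ["2 + 2 = 5"]),
    ("unit_mismatch", ["kg = m"]),
    ("unsupported_claim", ["obviously true"]),
]

def flags_error(trace: str, patterns: list[str]) -> bool:
    """Stand-in for an LLM judge: flag if any error pattern appears."""
    return any(p in trace for p in patterns)

def rubric_reward(trace: str) -> float:
    """Reward = fraction of rubric items the trace passes (no error flagged)."""
    passed = sum(not flags_error(trace, pats) for _, pats in RUBRIC)
    return passed / len(RUBRIC)

clean = "We compute 2 + 2 = 4 and conclude from the given premises."
flawed = "Since 2 + 2 = 5, the result is obviously true."
print(rubric_reward(clean))   # 1.0
print(rubric_reward(flawed))  # 0.333... (two of three rubric items flagged)
```

Scoring each taxonomy item independently is what makes the reward fine-grained: a trace is penalized per detected error category rather than with a single holistic correctness judgment.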