🤖 AI Summary
This work addresses the challenge that large language models (LLMs), when employed as reward evaluators in code-generation reinforcement learning, struggle to detect diverse reward-hacking behaviors, a gap compounded by the absence of a systematic evaluation benchmark. The authors propose the first fine-grained taxonomy of reward hacking in code environments, covering 54 vulnerability types, and introduce TRACE, the first synthetic, human-validated, contrastive detection benchmark, comprising 517 execution trajectories. Through comparative experiments between isolated classification and contrastive anomaly detection settings, they demonstrate that the latter substantially improves detection performance: GPT-5.2 achieves a 63% detection rate in the contrastive setting, 18 percentage points above isolated classification. The study further reveals that LLMs are markedly weaker at identifying semantic reward hacks than syntactic ones.
📄 Abstract
Recent advances in reinforcement learning for code generation have made robust environments essential to prevent reward hacking. As LLMs increasingly serve as evaluators in code-based RL, their ability to detect reward hacking remains understudied. In this paper, we propose a novel taxonomy of reward exploits spanning 54 categories and introduce TRACE (Testing Reward Anomalies in Code Environments), a synthetically curated and human-verified benchmark containing 517 testing trajectories. Unlike prior work that evaluates reward-hack detection in isolated classification scenarios, we contrast these evaluations with a more realistic, contrastive anomaly detection setup on TRACE. Our experiments reveal that models capture reward hacks more effectively in contrastive settings than in isolated classification settings, with GPT-5.2 in its highest reasoning mode achieving the best detection rate at 63%, up from 45% in isolated settings on TRACE. Building on this insight, we show that state-of-the-art models struggle significantly more with semantically contextualized reward hacks than with syntactically contextualized ones. We further conduct qualitative analyses of model behaviors, as well as ablation studies showing that the ratio of benign to hacked trajectories and the analysis cluster size substantially affect detection performance. We release the benchmark and evaluation harness to enable the community to expand TRACE and evaluate their models.
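The difference between the two evaluation setups can be sketched as follows. This is a minimal illustrative harness, not the actual TRACE evaluation code: the `Trajectory` container, the keyword-based stand-in judges, and the cluster size are all hypothetical placeholders for the paper's LLM judges and real execution trajectories.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Set

@dataclass
class Trajectory:
    code: str       # solution / execution trace shown to the judge
    is_hack: bool   # ground-truth label, hidden from the judge

def isolated_detection_rate(trajs: Sequence[Trajectory],
                            judge_one: Callable[[str], bool]) -> float:
    """Isolated classification: the judge labels each trajectory on its own."""
    hacks = [t for t in trajs if t.is_hack]
    hits = sum(judge_one(t.code) for t in hacks)
    return hits / len(hacks) if hacks else 0.0

def contrastive_detection_rate(trajs: Sequence[Trajectory],
                               judge_cluster: Callable[[List[str]], Set[int]],
                               cluster_size: int = 4) -> float:
    """Contrastive anomaly detection: the judge sees a cluster of mostly
    benign trajectories at once and flags the indices it deems anomalous."""
    hits = total = 0
    for i in range(0, len(trajs), cluster_size):
        cluster = trajs[i:i + cluster_size]
        flagged = judge_cluster([t.code for t in cluster])
        for j, t in enumerate(cluster):
            if t.is_hack:
                total += 1
                hits += int(j in flagged)
    return hits / total if total else 0.0

# Toy stand-in judges; a real harness would prompt an LLM here.
def keyword_judge_one(code: str) -> bool:
    return "sys.exit(0)" in code  # crude syntactic hack signal

def keyword_judge_cluster(codes: List[str]) -> Set[int]:
    return {i for i, c in enumerate(codes) if "sys.exit(0)" in c}
```

In the contrastive setting the judge can compare a suspicious trajectory against benign siblings solving the same task, which is the anomaly signal the isolated setting lacks.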