🤖 AI Summary
In external reasoning systems, process reward models (PRMs) are susceptible to confounding semantic features, which cause "reward hacking": the PRM assigns high scores to logically flawed yet superficially plausible reasoning traces, degrading mathematical problem-solving accuracy. This work is the first to formalize the issue through a causal-inference lens and proposes Causal Reward Adjustment (CRA): it trains sparse autoencoders to disentangle the PRM's hidden-layer activations, identifies interpretable confounding features, and corrects the reward via the backdoor adjustment formula. CRA requires no modification of the policy model and no retraining of the PRM, making it interpretable and plug-and-play. Evaluated across multiple mathematical reasoning benchmarks, CRA significantly mitigates reward hacking and improves final-answer accuracy, demonstrating both effectiveness and cross-task generalization.
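To make the sparse-autoencoder step concrete, here is a minimal sketch of encoding PRM hidden activations into an overcomplete, non-negative feature dictionary. The dimensions, ReLU encoder, and L1 sparsity penalty are common SAE conventions assumed for illustration; they are not the paper's actual architecture or training setup.

```python
import numpy as np

# Illustrative sparse autoencoder over PRM hidden activations.
# d_model, d_dict, and the loss weighting are assumptions, not the paper's config.
rng = np.random.default_rng(0)
d_model, d_dict = 8, 32                  # activation dim, overcomplete dictionary size
W_enc = rng.normal(0, 0.1, (d_model, d_dict))
W_dec = rng.normal(0, 0.1, (d_dict, d_model))
b_enc = np.zeros(d_dict)

def encode(x: np.ndarray) -> np.ndarray:
    # ReLU yields sparse, non-negative feature activations,
    # each column of W_enc acting as one candidate interpretable feature
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f: np.ndarray) -> np.ndarray:
    return f @ W_dec

x = rng.normal(size=(4, d_model))        # a batch of PRM hidden-layer activations
f = encode(x)                            # feature activations, shape (4, d_dict)
x_hat = decode(f)
recon_loss = np.mean((x - x_hat) ** 2)   # reconstruction term
l1_loss = np.abs(f).mean()               # sparsity term encouraging few active features
```

Training would minimize `recon_loss + lambda * l1_loss`; after training, individual dictionary features can be inspected for confounding semantics.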
📝 Abstract
External reasoning systems combine language models with process reward models (PRMs) to select high-quality reasoning paths for complex tasks such as mathematical problem solving. However, these systems are prone to reward hacking, where the PRM assigns high scores to logically incorrect paths, leading to incorrect final answers. From a causal inference perspective, we attribute this phenomenon primarily to the presence of confounding semantic features. To address it, we propose Causal Reward Adjustment (CRA), a method that mitigates reward hacking by estimating the true reward of a reasoning path. CRA trains sparse autoencoders on the PRM's internal activations to recover interpretable features, then corrects confounding by using backdoor adjustment. Experiments on math problem-solving datasets demonstrate that CRA mitigates reward hacking and improves final accuracy, without modifying the policy model or retraining the PRM.
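The backdoor adjustment the abstract refers to can be sketched numerically. Assuming a discrete confounder Z with known marginal P(z), the adjusted reward is E[R | do(path)] = Σ_z P(z) · E[R | path, z]. All values and names below are illustrative, not taken from the paper.

```python
import numpy as np

def backdoor_adjusted_reward(reward_given_z: np.ndarray, p_z: np.ndarray) -> np.ndarray:
    """Backdoor adjustment over a discrete confounder:
    E[R | do(path)] = sum_z P(z) * E[R | path, z].
    reward_given_z[i, z] is the PRM score of path i under confounder state z;
    p_z[z] is the corpus-level marginal of state z."""
    return reward_given_z @ p_z

# Hypothetical numbers: two reasoning paths, two confounder states.
p_z = np.array([0.7, 0.3])
reward_given_z = np.array([
    [0.9, 0.2],   # path A: scores well only when the confounding feature is present
    [0.6, 0.6],   # path B: score is invariant to the confounder
])
adjusted = backdoor_adjusted_reward(reward_given_z, p_z)
# adjusted[0] = 0.9*0.7 + 0.2*0.3 = 0.69; adjusted[1] = 0.6
```

Averaging over the confounder's marginal (rather than its path-conditional distribution) is what removes the spurious credit a confounded path would otherwise receive.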