🤖 AI Summary
This work addresses reward hacking and opaque internal objectives in reinforcement learning from human feedback (RLHF), problems that often arise from spurious correlations in proxy rewards. The authors propose IR³ (Interpretable Reward Reconstruction and Rectification), a framework that reverse-engineers the implicit reward function by contrasting responses from the aligned policy with those of a baseline policy, then decomposes the reconstructed reward into interpretable features using sparse autoencoders. By analyzing each feature's contribution to the reward, the framework identifies and surgically corrects reward-hacking features. Experiments show that the reconstructed reward achieves a 0.89 correlation with the ground-truth reward across diverse reward model configurations and that hacking features are identified with over 90% precision. The approach substantially suppresses reward hacking while keeping performance degradation under 3%. Notably, this study presents the first method for interpretable reverse engineering of, and targeted intervention on, the implicit objectives of RLHF-tuned models.
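The contrastive reconstruction step can be pictured with the standard KL-regularized RLHF identity, under which the implicit reward is proportional to the log-probability ratio between the aligned policy and its reference. The paper's C-IRL objective is not spelled out here, so the sketch below is only an illustrative proxy; `beta` and the toy log-probabilities are assumptions.

```python
import numpy as np

def implicit_reward(logp_aligned, logp_baseline, beta=0.1):
    """Reconstruct a per-response implicit reward as the scaled
    log-probability ratio between aligned and baseline policies,
    i.e. r(x, y) ~ beta * log(pi_aligned(y|x) / pi_baseline(y|x)).
    This is a generic proxy, not the paper's C-IRL estimator."""
    return beta * (np.asarray(logp_aligned) - np.asarray(logp_baseline))

# Toy example: summed log-probs of three responses under each policy
# (hypothetical numbers for illustration).
logp_aligned = [-12.3, -15.0, -9.8]
logp_baseline = [-14.1, -14.2, -13.5]
rewards = implicit_reward(logp_aligned, logp_baseline)
# Responses the aligned policy upweights relative to the baseline
# receive positive reconstructed reward; downweighted ones negative.
```

Responses 1 and 3 are upweighted by the aligned policy and score positively, while response 2 is downweighted and scores negatively.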
📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) enables powerful LLM alignment but can introduce reward hacking, in which models exploit spurious correlations in proxy rewards without genuine alignment. Compounding this, the objectives internalized during RLHF remain opaque, making hacking behaviors difficult to detect or correct. We introduce IR³ (Interpretable Reward Reconstruction and Rectification), a framework that reverse-engineers, interprets, and surgically repairs the implicit objectives driving RLHF-tuned models. We propose Contrastive Inverse Reinforcement Learning (C-IRL), which reconstructs the implicit reward function by contrasting paired responses from post-alignment and baseline policies to explain behavioral shifts during RLHF. We then decompose the reconstructed reward via sparse autoencoders into interpretable features, enabling identification of hacking signatures through contribution analysis. Finally, we propose mitigation strategies (clean reward optimization, adversarial shaping, constrained optimization, and feature-guided distillation) that target problematic features while preserving beneficial alignment. Experiments across multiple reward model configurations show that IR³ achieves 0.89 correlation with ground-truth rewards, identifies hacking features with over 90% precision, and significantly reduces hacking behaviors while maintaining capabilities within 3% of the original model.
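The contribution-analysis step can be sketched generically: given sparse feature activations from an autoencoder and a linear read-out approximating the reconstructed reward, each feature's per-response contribution is its activation times its read-out weight, and features whose contributions diverge between hacked and clean responses are flagged. Everything here (dimensions, the random data, the binary hacking label) is a synthetic assumption, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n responses, k sparse SAE features, and a linear
# read-out approximating the reconstructed reward.
n, k = 200, 8
acts = np.abs(rng.normal(size=(n, k))) * (rng.random((n, k)) < 0.3)  # sparse activations
w = rng.normal(size=k)            # reward read-out weights
contrib = acts * w                # per-response, per-feature reward contribution

# Assume a binary label marking responses independently judged as hacking.
hacked = rng.random(n) < 0.2
# A feature is a hacking-signature candidate if its mean contribution on
# hacked responses differs sharply from its mean on clean responses.
gap = contrib[hacked].mean(axis=0) - contrib[~hacked].mean(axis=0)
candidates = np.argsort(-np.abs(gap))[:2]   # top-2 most divergent features
```

The flagged features would then be the targets for the rectification step, e.g. masking or re-weighting them before re-optimizing the policy.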