SteerRM: Debiasing Reward Models via Sparse Autoencoders

πŸ“… 2026-03-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Reward models are susceptible to superficial stylistic cues, often favoring responses with preferred formatting over those with superior semantic content, thereby introducing alignment bias. This work proposes an inference-time intervention that requires neither retraining nor architectural modification: leveraging sparse autoencoders (SAEs) to identify and suppress features associated with such stylistic biases. The study reveals that these bias-related features are concentrated in the shallow layers of the model and exhibit cross-architectural transferability, suggesting a common underlying encoding pattern for stylistic bias. Evaluated on RM-Bench, the method improves Hard-split accuracy by an average of 7.3 points while preserving overall performance. Furthermore, it demonstrates strong generalization across diverse settings, including Gemma-based reward models and scenarios involving non-stylistic biases.

πŸ“ Abstract
Reward models (RMs) are critical components of alignment pipelines, yet they exhibit biases toward superficial stylistic cues, preferring better-presented responses over semantically superior ones. Existing debiasing methods typically require retraining or architectural modifications, while direct activation suppression degrades performance due to representation entanglement. We propose SteerRM, the first training-free method for debiasing reward models using Sparse Autoencoder (SAE)-based interventions. SteerRM isolates stylistic effects using contrastive paired responses, identifies bias-related SAE features with a strength-stability criterion, and suppresses them at inference time. Across six reward models on RM-Bench, SteerRM improves Hard-split accuracy by 7.3 points on average while preserving overall performance. Results on a Gemma-based reward model and a controlled non-format bias further suggest generalization across RM architectures and bias types. We further find that format-related features are concentrated in shallow layers and transfer across models, revealing shared architecture-level bias encoding patterns. These results show that SAE-based interventions can mitigate reward-model biases without retraining, providing a practical and interpretable solution for alignment pipelines.
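The abstract's core mechanism is a suppress-and-reconstruct edit on hidden activations: encode an activation with a sparse autoencoder, zero out the bias-related dictionary features, decode, and add back the SAE's reconstruction error so unrelated information survives. The sketch below illustrates this step only; the dimensions, parameter names, and feature indices are illustrative assumptions, not the paper's actual trained SAE or its strength-stability feature-selection criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: model hidden width and SAE dictionary width (assumptions).
d_model, d_sae = 16, 64

# Toy SAE parameters; in practice these come from a trained sparse autoencoder.
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_model)

def sae_encode(h):
    # Standard SAE encoder: ReLU over an affine map of the centered activation.
    return np.maximum(0.0, (h - b_dec) @ W_enc + b_enc)

def sae_decode(z):
    return z @ W_dec + b_dec

def suppress_features(h, bias_feature_ids):
    """Zero out selected SAE features and keep the reconstruction error,
    so everything the SAE fails to capture passes through unchanged."""
    z = sae_encode(h)
    err = h - sae_decode(z)         # residual the dictionary does not explain
    z[..., bias_feature_ids] = 0.0  # ablate the (hypothetical) bias features
    return sae_decode(z) + err      # edited activation fed back into the RM

h = rng.normal(size=d_model)
h_edit = suppress_features(h, bias_feature_ids=[3, 17])
```

Note the design choice of carrying the reconstruction error `err` forward: with an empty feature list the edit is an exact identity, which is what lets a targeted intervention avoid the broad performance degradation the abstract attributes to direct activation suppression.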
Problem

Research questions and friction points this paper is trying to address.

reward models
bias
debiasing
alignment
stylistic cues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Autoencoder
Reward Model Debiasing
Training-Free Intervention
Feature Suppression
Alignment
Mengyuan Sun
National Engineering Research Center for Software Engineering, Peking University
Zhuohao Yu
Peking University
Natural Language Processing, Software Engineering
Weizheng Gu
National Engineering Research Center for Software Engineering, Peking University
Shikun Zhang
Peking University
Wei Ye
Peking University
Software Engineering, Natural Language Processing