🤖 AI Summary
Traditional reward modeling struggles to capture diverse and complex human preferences. To address this, we propose Decomposed Reward Modeling (DRM), a framework that learns orthogonal preference dimensions solely from binary response-comparison data, without requiring fine-grained annotations. DRM is the first to introduce Principal Component Analysis (PCA) into preference modeling, disentangling human preferences into semantically clear, interpretable, and orthogonal basis vectors (e.g., helpfulness, safety, humor). By modeling reward signals via response-embedding differences and decomposed reward learning, DRM enables flexible composition of preference dimensions and zero-shot user adaptation. Experiments demonstrate that DRM achieves cross-user personalized alignment without user-specific training, substantially improving the interpretability, adaptability, and preference-modeling capability of large language models.
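The core mechanic described above, running PCA on the differences between preferred and rejected response embeddings to obtain orthogonal preference directions, can be sketched in a few lines of NumPy. This is a minimal illustration with random stand-in embeddings; the array names, sizes, and the SVD-based PCA are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Stand-in embeddings: in DRM these would come from an LLM/reward-model
# encoder applied to each response in a binary comparison dataset.
rng = np.random.default_rng(0)
n_pairs, dim = 500, 16
chosen = rng.normal(size=(n_pairs, dim))    # embeddings of preferred responses
rejected = rng.normal(size=(n_pairs, dim))  # embeddings of rejected responses

# 1. Build the dataset of embedding differences (one vector per comparison).
diffs = chosen - rejected                   # shape (n_pairs, dim)

# 2. Center and run PCA via SVD: the right singular vectors are
#    orthonormal directions in embedding space ("decomposed rewards").
mean = diffs.mean(axis=0)
_, singular_values, vt = np.linalg.svd(diffs - mean, full_matrices=False)
basis = vt                                  # shape (dim, dim), rows orthonormal

# 3. Each basis vector w_k defines a scalar reward r_k(e) = w_k @ e;
#    component k "agrees" with a comparison when w_k @ (chosen - rejected) > 0.
scores = diffs @ basis.T                    # per-component scores, (n_pairs, dim)

# The decomposed directions are orthogonal by construction.
assert np.allclose(basis @ basis.T, np.eye(dim), atol=1e-8)
```

In practice one would keep only the top components (largest singular values) and inspect what each direction rewards, e.g. helpfulness or safety.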
📄 Abstract
Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensive and hard to scale. In this paper, we introduce Decomposed Reward Models (DRMs), a novel approach that extracts diverse human preferences from binary comparisons without requiring fine-grained annotations. Our key insight is to represent human preferences as vectors and analyze them using Principal Component Analysis (PCA). By constructing a dataset of embedding differences between preferred and rejected responses, DRMs identify orthogonal basis vectors that capture distinct aspects of preference. These decomposed rewards can be flexibly combined to align with different user needs, offering an interpretable and scalable alternative to traditional reward models. We demonstrate that DRMs effectively extract meaningful preference dimensions (e.g., helpfulness, safety, humor) and adapt to new users without additional training. Our results highlight DRMs as a powerful framework for personalized and interpretable LLM alignment.
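The abstract's claim that decomposed rewards "can be flexibly combined to align with different user needs" without additional training can be made concrete with a small sketch: given a fixed orthonormal basis of preference directions, score a handful of a new user's comparisons with each component and weight components by how often they agree with that user. The agreement-based weighting rule below is one simple combiner chosen for illustration, not necessarily the paper's exact adaptation procedure, and the data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, k = 16, 8  # embedding size, number of retained preference components

# Stand-in orthonormal basis (rows), e.g. the top-k PCA directions from DRM.
basis = np.linalg.qr(rng.normal(size=(dim, dim)))[0][:k]

# A few held-out comparisons from a new user, as (chosen - rejected) diffs.
user_diffs = rng.normal(size=(20, dim))

# Score every comparison with every component: positive means the component
# prefers the same response the user did.
comp_scores = user_diffs @ basis.T          # shape (20, k)

# Weight each component by its agreement rate with the user's choices
# (illustrative rule: keep only components that beat chance, then normalize).
agreement = (comp_scores > 0).mean(axis=0)  # fraction of pairs each head gets right
weights = np.maximum(agreement - 0.5, 0.0)
weights = weights / weights.sum() if weights.sum() > 0 else np.full(k, 1.0 / k)

# The user-adapted reward is a weighted sum of the fixed bases; no gradient
# training of the underlying model is needed.
user_reward_vec = weights @ basis           # shape (dim,)
```

Because adaptation only fits `k` scalar weights, it needs very little user data, which is what makes the cross-user personalization claim plausible at scale.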