🤖 AI Summary
This paper investigates the normative preference structure latent in human binary preference ratings. Method: We propose the “Human Preference Canonical Basis” hypothesis—that cross-population and cross-topic preference variation can be efficiently represented by a sparse, low-dimensional (21-dimensional), orthogonal basis. Our approach integrates principal component analysis, low-rank approximation, and validation on both synthetic and empirical datasets. Contribution/Results: Across multiple large-scale preference datasets, the canonical basis explains over 89% of individual preference variance, demonstrating strong generalizability. This work is the first to systematically establish the existence of a transferable, interpretable, low-dimensional canonical basis for human preferences—paralleling foundational representational principles in cognitive science. Moreover, the basis enables downstream alignment diagnostics and category-aware fine-tuning, significantly improving the precision and interpretability of behavioral control in large language models.
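The core measurement above—how much variance a low-dimensional basis explains—can be illustrated with a small sketch. This is not the paper's pipeline; it is a toy simulation (synthetic ratings matrix, assumed dimensions, numpy-based PCA via SVD) showing how one would compute the fraction of preference variance captured by the top 21 principal components.

```python
import numpy as np

# Toy ratings matrix R (annotators x preference items), simulated with
# rank-21 latent structure plus small noise. The sizes and noise level
# are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
n_annotators, n_items, rank = 500, 200, 21
R = rng.normal(size=(n_annotators, rank)) @ rng.normal(size=(rank, n_items))
R += 0.1 * rng.normal(size=R.shape)

# PCA via SVD of the mean-centered matrix.
X = R - R.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# Fraction of total variance captured by the top-21 components;
# the paper reports >89% on real preference data.
top21 = float(explained[:21].sum())
print(f"top-21 explained variance: {top21:.3f}")
```

On this synthetic low-rank data the top-21 share is near 1.0 by construction; the paper's empirical claim is that real human preference data exhibits comparably strong low-rank structure.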
📝 Abstract
Recent advances in generative AI have been driven by alignment techniques such as reinforcement learning from human feedback (RLHF). RLHF and related techniques typically involve constructing a dataset of binary or ranked-choice human preferences and subsequently fine-tuning models to align with these preferences. This paper shifts the focus to understanding the preferences encoded in such datasets and identifying common human preferences. We find that a small subset of 21 preference categories (selected from a set of nearly 5,000 distinct preferences) captures >89% of preference variation across individuals. This small set of preferences is analogous to a canonical basis of human preferences, similar to established findings that characterize human variation in psychology or facial recognition studies. Through both synthetic and empirical evaluations, we confirm that our low-rank, canonical set of human preferences generalizes across the entire dataset and within specific topics. We further demonstrate the utility of our preference basis in model evaluation, where our preference categories offer deeper insights into model alignment, and in model training, where we show that fine-tuning on preference-defined subsets successfully aligns the model accordingly.