AI Summary
This paper addresses the interpretability challenge of human value trade-offs in large language models (LLMs). It introduces, for the first time, a utility-weighting model from cognitive science into LLM analysis, establishing a framework that quantifies the tension between informational and social utility. Methodologically, it draws on politeness theory to conduct systematic attribution analyses across the pretraining and post-training stages of state-of-the-art reasoning models and open-source models. The key contributions are threefold: (1) it demonstrates that LLMs exhibit human-like value trade-off mechanisms, while reasoning models show a pronounced bias toward informational utility; (2) it reveals that value orientation is shaped predominantly by the choice of base model and pretraining data, far more than by the feedback dataset or alignment method; (3) it finds that the critical utility shifts occur early in training, laying the foundation for long-lasting value preferences.
Abstract
Navigating everyday social situations often requires juggling conflicting goals, such as conveying a harsh truth while maintaining trust and being mindful of another person's feelings. These value trade-offs are an integral part of human decision-making and language use; however, current tools for interpreting such dynamic and multi-faceted notions of values in LLMs are limited. In cognitive science, so-called "cognitive models" provide formal accounts of these trade-offs in humans by modeling how a speaker weights competing utility functions when choosing an action or utterance. In this work, we use a leading cognitive model of polite speech to interpret the extent to which LLMs represent human-like trade-offs. We apply this lens to systematically evaluate value trade-offs in two encompassing model settings: degrees of reasoning "effort" in frontier black-box models, and RL post-training dynamics of open-source models. Our results show that informational utility outweighs social utility in reasoning models, and in open-source models that are stronger in mathematical reasoning. Our analysis of training dynamics suggests that large shifts in utility values occur early in training, with persistent effects of the choice of base model and pretraining data compared to the feedback dataset or alignment method. We show that our method is responsive to diverse aspects of the rapidly evolving LLM landscape, with insights for forming hypotheses about other high-level behaviors, shaping training regimes for reasoning models, and better controlling trade-offs between values during model training.
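For intuition, here is a minimal sketch of the kind of utility-weighting speaker model the abstract refers to, in the spirit of RSA-style cognitive models of polite speech. All states, utterances, utility values, and weights (`phi_inf`, `phi_soc`) below are hypothetical toy choices for illustration; this is not the paper's actual model or data.

```python
import math

# Toy utility-weighting speaker model, in the spirit of RSA-style models
# of polite speech: the speaker trades off informational utility
# (conveying the true state) against social utility (making the listener
# feel good). All states, utterances, and weights are hypothetical.

# True state on a 1-5 scale (here: the listener's performance was bad).
TRUE_STATE = 1

# Candidate utterances: the state each literally conveys, and its social
# "face" value for the listener.
UTTERANCES = {
    "it was terrible": {"conveyed_state": 1, "social_value": 0.0},
    "it was okay":     {"conveyed_state": 3, "social_value": 0.5},
    "it was amazing":  {"conveyed_state": 5, "social_value": 1.0},
}

def informational_utility(utterance: str) -> float:
    # Higher when the conveyed state is closer to the true state.
    return -abs(UTTERANCES[utterance]["conveyed_state"] - TRUE_STATE)

def social_utility(utterance: str) -> float:
    return UTTERANCES[utterance]["social_value"]

def speaker_distribution(phi_inf: float, phi_soc: float,
                         temperature: float = 0.5) -> dict:
    # Softmax choice over the weighted sum of the two utilities.
    scores = {
        u: phi_inf * informational_utility(u) + phi_soc * social_utility(u)
        for u in UTTERANCES
    }
    z = sum(math.exp(s / temperature) for s in scores.values())
    return {u: math.exp(s / temperature) / z for u, s in scores.items()}

# An "informational" speaker favors the blunt truth ...
print(speaker_distribution(phi_inf=0.9, phi_soc=0.1))
# ... while a "social" speaker favors the kind exaggeration.
print(speaker_distribution(phi_inf=0.1, phi_soc=0.9))
```

Varying the weights reproduces the trade-off under study: a speaker weighted toward informational utility prefers the blunt truth, while one weighted toward social utility prefers kinder but less accurate utterances. Fitting such weights to model outputs is what allows utility trade-offs to be compared across models and training stages.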