🤖 AI Summary
This work addresses the challenge of reward signal opacity in large language model alignment, which often leads to goal misalignment and reward hacking due to complex, inscrutable objectives. To uncover interpretable yet implicit alignment targets without relying on predefined rules, the authors propose Obj-Disco, a novel framework that automatically discovers comprehensive, causally influential alignment objectives. By integrating iterative greedy search, behavioral residual analysis, and multi-checkpoint model tracing, Obj-Disco generates and validates sparse, weighted combinations of natural-language objectives. Experiments demonstrate that the framework consistently captures over 90% of reward-driven behavior across diverse tasks, model scales, and alignment algorithms. Human evaluations further confirm its ability to identify latent misaligned incentives in open-source reward models, effectively revealing "unknown unknowns" in current alignment practices.
📝 Abstract
Large language model (LLM) alignment relies on complex reward signals that often obscure the specific behaviors being incentivized, creating critical risks of misalignment and reward hacking. Existing interpretation methods typically rely on predefined rubrics, risking the omission of "unknown unknowns", or fail to identify objectives that comprehensively cover, and are causal to, the model's behavior. To address these limitations, we introduce Obj-Disco, a framework that automatically decomposes an alignment reward signal into a sparse, weighted combination of human-interpretable natural-language objectives. Our approach uses an iterative greedy algorithm to analyze behavioral changes across training checkpoints, identifying and validating candidate objectives that best explain the residual reward signal. Extensive evaluations across diverse tasks, model sizes, and alignment algorithms demonstrate the framework's robustness. Experiments with popular open-source reward models show that the framework consistently captures more than 90% of reward behavior, a finding further corroborated by human evaluation. Additionally, a case study on alignment with an open-source reward model reveals that Obj-Disco can successfully identify latent misaligned incentives that emerge alongside intended behaviors. Our work provides a crucial tool for uncovering the implicit objectives in LLM alignment, paving the way for more transparent and safer AI development.