AI Summary
Large language models (LLMs) rely on costly human preference data to train reward models for alignment, hindering scalability and efficiency.
Method: We propose an endogenous reward extraction method that requires no additional training, fine-tuning, human annotation, or AI feedback. Through theoretical analysis, we show that the implicit reward function encoded in a pretrained autoregressive LM is equivalent to the solution of offline inverse reinforcement learning (IRL), and that a policy optimized against it admits a tighter error bound than the base model. Our approach decodes this intrinsic reward signal analytically from standard LM outputs.
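To make the decoding step concrete, here is a minimal, self-contained sketch of one way such an implicit reward can be read off a model's outputs: each step's logits are treated as soft Q-values, and a soft Bellman equation is inverted to recover per-token rewards. The function name `endogenous_reward`, the toy logits, and the discount `gamma` are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import math

def logsumexp(xs):
    # numerically stable log(sum(exp(x))) over a logit vector
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def endogenous_reward(logits, token_ids, gamma=1.0):
    """Illustrative inverse soft Bellman step (assumed form):
        r_t = q_t(a_t) - gamma * logsumexp(q_{t+1})
    where q_t are the step-t logits, read as soft Q-values.
    The final step has no successor, so r_T = q_T(a_T)."""
    rewards = []
    T = len(token_ids)
    for t in range(T):
        r = logits[t][token_ids[t]]
        if t + 1 < T:  # subtract the soft value of the successor state
            r -= gamma * logsumexp(logits[t + 1])
        rewards.append(r)
    return rewards

# tiny hand-made example: 3 steps, vocabulary of 4 (stand-in for LM logits)
logits = [[0.1, 1.2, -0.3, 0.5],
          [0.9, -0.2, 0.4, 0.0],
          [0.2, 0.7, 0.1, -1.0]]
seq = [1, 0, 3]
rs = endogenous_reward(logits, seq)
print(len(rs))  # 3 per-token rewards
```

In practice the logits would come from a real causal LM's forward pass; the point of the sketch is only that the reward is computed analytically from quantities the model already produces, with no extra training.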
Results: Empirical evaluation across multiple alignment benchmarks demonstrates that our method outperforms mainstream LLM-as-a-judge approaches and matches or exceeds the performance of explicitly trained reward models, while eliminating dependence on preference data. This yields substantial gains in alignment efficiency, scalability, and theoretical rigor.
Abstract
The alignment of Large Language Models (LLMs) is critically dependent on reward models trained on costly human preference data. While recent work explores bypassing this cost with AI feedback, these methods often lack a rigorous theoretical foundation. In this paper, we discover that a powerful generalist reward model is already latently present within any LLM trained via standard next-token prediction. We prove that this endogenous reward is not a heuristic, but is theoretically equivalent to a reward function learned through offline inverse reinforcement learning. This connection allows us to directly elicit a high-quality reward signal from a base (pre-trained or supervised fine-tuned) model without any further training. Critically, we also prove that subsequent reinforcement learning using this endogenous reward leads to a policy with a provably superior error bound compared to the base model. To the best of our knowledge, this is the first theoretical proof of the effectiveness of reinforcement learning for LLMs. Our experiments validate this theory, demonstrating that our method not only outperforms existing LLM-as-a-judge approaches but can also surpass explicitly trained reward models. These findings suggest that the reward modeling stage can be replaced by a principled method of eliciting the knowledge already captured during pre-training, heralding a more efficient, powerful, and scalable paradigm for aligning LLMs as well as multi-modal models.