🤖 AI Summary
Existing LLM alignment methods, such as RLHF and DPO, rely on sparse, response-level rewards and ignore fine-grained token-level quality variations. This can erroneously penalize high-quality tokens or reinforce low-quality ones, causing optimization bias and slow convergence. To address this, we propose AlignDistil, a token-level reward distillation framework for alignment. First, we introduce the reward learned by DPO into the RLHF objective and prove that the resulting objective is theoretically equivalent to a token-level distillation process. Next, we design a contrastive DPO reward mechanism coupled with token-adaptive logit extrapolation to construct a fine-grained, dynamic teacher distribution for each token. Finally, we unify DPO, policy distillation, contrastive learning, and adaptive logit scaling into a single token-level distribution distillation objective. Experiments across multiple benchmarks demonstrate that our method significantly outperforms both RLHF and DPO in alignment quality and training efficiency, converges faster, and effectively mitigates token-level optimization distortion.
📝 Abstract
Alignment is of crucial importance for modern large language models (LLMs) and is typically achieved through methods such as reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO). However, most existing alignment methods optimize all tokens in a response using a sparse, response-level reward or preference annotation. Ignoring token-level rewards may erroneously punish high-quality tokens or encourage low-quality ones, resulting in suboptimal performance and slow convergence. To address this issue, we propose AlignDistil, an RLHF-equivalent distillation method for token-level reward optimization. Specifically, we introduce the reward learned by DPO into the RLHF objective and theoretically prove the equivalence between this objective and a token-level distillation process, where the teacher distribution linearly combines the logits from the DPO model and a reference model. On this basis, we further bridge the accuracy gap between the reward from the DPO model and that from a pure reward model by building a contrastive DPO reward with a normal and a reverse DPO model. Moreover, to avoid under- and over-optimization on different tokens, we design a token-adaptive logit extrapolation mechanism that constructs an appropriate teacher distribution for each token. Experimental results demonstrate the superiority of AlignDistil over existing methods and showcase its fast convergence due to token-level distributional reward optimization.
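The core teacher construction described in the abstract, a per-token distribution obtained by linearly combining (and extrapolating) the logits of the DPO model and the reference model, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the helper names, the NumPy tensors standing in for model logits, and the scalar `alpha` (which the paper instead adapts per token) are all assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the vocabulary axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def teacher_distribution(logits_dpo, logits_ref, alpha=1.0):
    """Build a token-level teacher distribution from two models' logits.

    The teacher logits are a linear combination of the reference and DPO
    logits: z = z_ref + alpha * (z_dpo - z_ref).
      - alpha = 0 recovers the reference model's distribution,
      - alpha = 1 recovers the DPO model's distribution,
      - alpha > 1 extrapolates beyond the DPO model, amplifying the
        preference signal (the paper adapts this factor per token).
    Shapes: (sequence_length, vocab_size) -> (sequence_length, vocab_size).
    """
    z = logits_ref + alpha * (logits_dpo - logits_ref)
    return softmax(z)
```

With such a teacher in hand, the student policy would be trained by minimizing a per-token KL divergence to `teacher_distribution(...)`, which is what makes the reward signal dense (one distribution per token) rather than a single response-level scalar.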