🤖 AI Summary
Existing direct preference optimization (DPO) methods neglect token-level importance disparities and are sensitive to noise in preference annotations. To address these limitations, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO). First, TI-DPO introduces a gradient-based mechanism that dynamically computes token importance weights via backpropagated gradients, enabling fine-grained, adaptive token weighting. Second, it employs a triplet loss that explicitly pulls preferred responses closer, pushes dispreferred responses farther away, and regularizes outputs against the reference model. TI-DPO integrates gradient analysis, importance-aware weighting, and triplet learning within the DPO framework, without requiring reinforcement learning modules. Experiments across multiple benchmarks show that TI-DPO outperforms DPO and state-of-the-art RLHF methods, improving alignment accuracy, response diversity, and training stability while reducing computational overhead.
📝 Abstract
Ensuring that large language models (LLMs) generate outputs aligned with human preferences is important for safe and effective AI interactions. While Direct Preference Optimization (DPO) optimizes the policy model through an implicit reward function, it and its variants overlook the differential importance of individual tokens and are sensitive to judgment noise in preference datasets. Although recent methods attempt to assess token importance weights via probability prediction or simplistic weighting schemes, these approaches are prone to bias and still cannot fully address these issues. To solve this problem, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), which introduces two key innovations: gradient-based token-importance weights that dynamically prioritize critical tokens, and a triplet loss that explicitly guides model outputs toward human-preferred responses and away from non-preferred ones. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing a more stable and computationally efficient solution than DPO and other RLHF methods.
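To make the two ingredients concrete, here is a minimal, framework-free sketch of how gradient-derived token weights could combine with a DPO-style preference loss. This is an illustrative reconstruction, not the paper's implementation: `softmax_weights` stands in for normalizing backpropagated gradient magnitudes into importance weights, and `ti_dpo_loss` shows an importance-weighted Bradley-Terry margin against a reference model; all function names and the choice of softmax normalization are assumptions.

```python
import math

def softmax_weights(grad_norms, temperature=1.0):
    # Hypothetical stand-in: turn per-token gradient magnitudes
    # (obtained by backpropagation in the real method) into
    # importance weights that sum to 1.
    exps = [math.exp(g / temperature) for g in grad_norms]
    total = sum(exps)
    return [e / total for e in exps]

def ti_dpo_loss(logp_pref, logp_dispref,
                logp_ref_pref, logp_ref_dispref,
                w_pref, w_dispref, beta=0.1):
    """Illustrative token-importance-weighted DPO-style loss.

    logp_*: per-token log-probabilities under the policy / reference model.
    w_*:    token importance weights (sum to 1), e.g. from gradient norms.
    beta:   inverse-temperature scaling, as in standard DPO.
    """
    # Importance-weighted log-ratios against the reference model
    # (this is where the reference-model regularization enters).
    r_pref = sum(w * (lp - lr)
                 for w, lp, lr in zip(w_pref, logp_pref, logp_ref_pref))
    r_dispref = sum(w * (lp - lr)
                    for w, lp, lr in zip(w_dispref, logp_dispref, logp_ref_dispref))
    # Bradley-Terry margin: push the preferred response's weighted
    # log-ratio above the dispreferred one's.
    margin = beta * (r_pref - r_dispref)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

In this toy form the loss falls below ln 2 exactly when the importance-weighted margin favors the preferred response, mirroring how standard DPO behaves at the sequence level; the weights simply let critical tokens dominate that margin.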