Token-Importance Guided Direct Preference Optimization

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing direct preference optimization (DPO) methods neglect token-level importance disparities and are sensitive to noise in preference annotations. To address these limitations, the paper proposes Token-Importance Guided Direct Preference Optimization (TI-DPO). First, TI-DPO introduces a gradient-based mechanism that dynamically computes token importance weights via backpropagated gradients, enabling fine-grained, adaptive token weighting. Second, it employs a triplet loss that explicitly pulls model outputs toward preferred responses, pushes them away from dispreferred responses, and regularizes them against the reference model. TI-DPO integrates gradient analysis, importance-aware weighting, and triplet learning within the DPO framework, without requiring reinforcement learning modules. Experiments across multiple benchmarks show that TI-DPO outperforms DPO and state-of-the-art RLHF methods, improving alignment accuracy, response diversity, and training stability while reducing computational overhead.
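The gradient-based weighting step can be sketched as follows. The helper below is hypothetical (the paper's exact normalization is not given here); it assumes per-token gradient norms have already been obtained by backpropagating the loss, and simply maps them to a weight distribution:

```python
import math

def token_importance_weights(grad_norms, temperature=1.0):
    # Hypothetical sketch: map per-token gradient norms (magnitudes of the
    # loss gradient w.r.t. each token's representation) to importance
    # weights via a temperature-scaled softmax, so tokens carrying a
    # larger gradient signal receive larger weights.
    scaled = [g / temperature for g in grad_norms]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Example: the third token dominates the gradient signal, so it gets
# the largest weight; the weights sum to 1.
weights = token_importance_weights([0.1, 0.2, 2.0])
```

A softmax is one natural choice here because it yields a proper distribution over tokens and the temperature controls how sharply the weighting concentrates on high-gradient tokens.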

📝 Abstract
Ensuring that large language models (LLMs) generate outputs aligned with human preferences is important for safe and effective AI interactions. While Direct Preference Optimization (DPO) employs an implicit reward function to optimize the policy model, it and its variants overlook the differential importance of individual tokens and are sensitive to judgment noise in preference datasets. Although recent methods attempt to assess token importance weights via probability prediction or simplistic weighting schemes, these approaches are prone to bias and still cannot fully address these issues. To solve this problem, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), which introduces two key innovations: gradient-based token-importance weights that dynamically prioritize critical tokens, and a triplet loss that explicitly guides model outputs toward human-preferred responses and away from non-preferred ones. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing a more stable and computationally efficient solution than DPO and other RLHF methods.
Problem

Research questions and friction points this paper is trying to address.

Aligning LLM outputs with human preferences for safe AI interactions
Addressing sensitivity to noise in preference datasets during generation
Improving token importance assessment to reduce biases in optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based token-importance weights prioritize critical tokens
Triplet loss guides model outputs toward human-preferred responses
Dynamic weighting enhances accuracy and generative diversity
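Putting the two innovations together, a minimal TI-DPO-style objective might look like the sketch below. The function name, argument layout, and the exact way importance weights enter the implicit reward are assumptions for illustration, not the paper's verified implementation:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ti_dpo_loss(logp_w, ref_w, weights_w,
                logp_l, ref_l, weights_l, beta=0.1):
    # Hypothetical sketch of a token-importance-weighted DPO objective:
    # each response's implicit reward is an importance-weighted sum of
    # per-token log-prob differences against the reference model, and
    # the loss pushes the preferred reward above the dispreferred one.
    r_w = sum(w * (p - r) for w, p, r in zip(weights_w, logp_w, ref_w))
    r_l = sum(w * (p - r) for w, p, r in zip(weights_l, logp_l, ref_l))
    return -math.log(_sigmoid(beta * (r_w - r_l)))

# Toy usage: the preferred response sits above the reference,
# the dispreferred one below it, so the loss is small but nonzero.
loss = ti_dpo_loss([-1.0, -1.0], [-1.5, -1.5], [0.5, 0.5],
                   [-2.0, -2.0], [-1.5, -1.5], [0.5, 0.5])
```

As in standard DPO, the loss decreases monotonically as the weighted reward margin between preferred and dispreferred responses grows, so widening that margin is exactly what gradient descent on this objective encourages.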