🤖 AI Summary
Existing LLM personalization methods rely solely on modeling individual user histories, neglecting cross-user heterogeneity analysis and thus failing to identify key factors driving preference divergence. This paper proposes Difference-Aware Personalized Learning (DPL), the first framework to incorporate cross-user contrastive modeling into LLM personalization. DPL introduces a structured difference extraction criterion and a differentiable representative user selection mechanism to construct task-aware heterogeneous preference contexts. It further integrates contrastive user representation alignment, difference-aware knowledge distillation, and lightweight adapter fine-tuning. Evaluated on multiple real-world datasets, DPL achieves substantial improvements in generation quality (BLEU-4 +2.7) and personalization consistency (+18.3%), marking a paradigm shift from “individual induction” to “contrastive discrimination.” The code is publicly available.
📝 Abstract
Personalizing Large Language Models (LLMs) has become a critical step in facilitating their widespread application to enhance individual life experiences. In pursuit of personalization, distilling key preference information from an individual's historical data as instructional preference context to customize LLM generation has emerged as a promising direction. However, these methods face a fundamental limitation by overlooking the inter-user comparative analysis, which is essential for identifying the inter-user differences that truly shape preferences. To address this limitation, we propose Difference-aware Personalization Learning (DPL), a novel approach that emphasizes extracting inter-user differences to enhance LLM personalization. DPL strategically selects representative users for comparison and establishes a structured standard to extract meaningful, task-relevant differences for customizing LLM generation. Extensive experiments on real-world datasets demonstrate that DPL significantly enhances LLM personalization. We release our code at https://github.com/SnowCharmQ/DPL.