Measuring What Makes You Unique: Difference-Aware User Modeling for Enhancing LLM Personalization

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM personalization methods rely solely on modeling individual user histories, neglecting cross-user heterogeneity analysis and thus failing to identify key factors driving preference divergence. This paper proposes Difference-Aware Personalized Learning (DPL), the first framework to incorporate cross-user contrastive modeling into LLM personalization. DPL introduces a structured difference extraction criterion and a differentiable representative user selection mechanism to construct task-aware heterogeneous preference contexts. It further integrates contrastive user representation alignment, difference-aware knowledge distillation, and lightweight adapter fine-tuning. Evaluated on multiple real-world datasets, DPL achieves substantial improvements in generation quality (BLEU-4 +2.7) and personalization consistency (+18.3%), marking a paradigm shift from “individual induction” to “contrastive discrimination.” The code is publicly available.

Technology Category

Application Category

📝 Abstract
Personalizing Large Language Models (LLMs) has become a critical step in facilitating their widespread application to enhance individual life experiences. In pursuit of personalization, distilling key preference information from an individual's historical data as instructional preference context to customize LLM generation has emerged as a promising direction. However, these methods face a fundamental limitation by overlooking the inter-user comparative analysis, which is essential for identifying the inter-user differences that truly shape preferences. To address this limitation, we propose Difference-aware Personalization Learning (DPL), a novel approach that emphasizes extracting inter-user differences to enhance LLM personalization. DPL strategically selects representative users for comparison and establishes a structured standard to extract meaningful, task-relevant differences for customizing LLM generation. Extensive experiments on real-world datasets demonstrate that DPL significantly enhances LLM personalization. We release our code at https://github.com/SnowCharmQ/DPL.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM personalization by identifying inter-user differences.
Extract meaningful, task-relevant differences for customizing LLM generation.
Propose Difference-aware Personalization Learning (DPL) to improve personalization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

DPL emphasizes inter-user difference extraction
Strategic selection of representative users for comparison
Structured standard for task-relevant difference extraction
🔎 Similar Papers
No similar papers found.
Yilun Qiu
Yilun Qiu
National University of Singapore
X
Xiaoyan Zhao
The Chinese University of Hong Kong
Y
Yang Zhang
National University of Singapore
Yimeng Bai
Yimeng Bai
University of Science and Technology of China
RecommendationGenerative RecommendationLarge Language Model
W
Wenjie Wang
University of Science and Technology of China
Hong Cheng
Hong Cheng
Professor, The Chinese University of Hong Kong
Data MiningDatabaseMachine Learning
F
Fuli Feng
University of Science and Technology of China
T
Tat-Seng Chua
National University of Singapore