PrLM: Learning Explicit Reasoning for Personalized RAG via Contrastive Reward Optimization

📅 2025-08-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing personalized RAG approaches rely on large language models (LLMs) to implicitly fuse user profiles with queries, making generated outputs highly sensitive to retrieval quality and prone to deviation from user preferences. This paper proposes an explicit user profile reasoning framework that models user preferences as interpretable intermediate representations and introduces a contrastive reward model—trained without human annotations—to optimize generation via reinforcement learning. Our core contributions are: (1) decoupling retrieval from reasoning to enforce explicit modeling of user-specific features; and (2) constructing fine-grained contrastive rewards grounded in multi-source user profiles to enhance preference alignment. Extensive experiments across three benchmark datasets demonstrate significant improvements over state-of-the-art methods, achieving +12.7% BLEU-4 and +9.3% ROUGE-L gains in personalized generation quality. Moreover, our method exhibits strong robustness under varying numbers of retrieved documents and across different retrievers.

Technology Category

Application Category

📝 Abstract
Personalized retrieval-augmented generation (RAG) aims to produce user-tailored responses by incorporating retrieved user profiles alongside the input query. Existing methods primarily focus on improving retrieval and rely on large language models (LLMs) to implicitly integrate the retrieved context with the query. However, such models are often sensitive to retrieval quality and may generate responses that are misaligned with user preferences. To address this limitation, we propose PrLM, a reinforcement learning framework that trains LLMs to explicitly reason over retrieved user profiles. Guided by a contrastively trained personalization reward model, PrLM effectively learns from user responses without requiring annotated reasoning paths. Experiments on three personalized text generation datasets show that PrLM outperforms existing methods and remains robust across varying numbers of retrieved profiles and different retrievers.
Problem

Research questions and friction points this paper is trying to address.

Improving personalized RAG by explicit reasoning over user profiles
Reducing sensitivity to retrieval quality in personalized response generation
Aligning generated responses with user preferences without annotated data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for explicit reasoning
Contrastive personalization reward model
Robust across varying retrieval conditions
🔎 Similar Papers
No similar papers found.
Kepu Zhang
Kepu Zhang
Renmin University of China
SearchLLMRecommendationLegal AI
Teng Shi
Teng Shi
Renmin University of China
Recommender SystemInformation Retrieval
W
Weijie Yu
School of Information Technology and Management, University of International Business and Economics
J
Jun Xu
Gaoling School of Artificial Intelligence, Renmin University of China