🤖 AI Summary
This work addresses a limitation of existing personalized question answering methods, which rely on direct retrieval over user profiles and struggle to deeply integrate user background and preferences. To overcome this, we propose PR2, a novel framework that, for the first time, introduces reinforcement learning into personalized QA. PR2 jointly optimizes a multi-step retrieval and reasoning process, dynamically determining when and what evidence to retrieve from the user profile and incorporating it into intermediate reasoning steps. This enables adaptive acquisition of personalized context and selection of reasoning paths. By integrating retrieval-augmented generation (RAG), large language models (LLMs), and a personalized reward function, PR2 achieves consistent improvements over strong baselines, yielding average relative gains of 8.8%–12% in personalized QA performance across three LLMs on the LaMP-QA benchmark.
📝 Abstract
Personalization in Question Answering (QA) requires answers that are both accurate and aligned with users' background, preferences, and historical context. Existing state-of-the-art methods primarily rely on retrieval-augmented generation (RAG) solutions that construct personal context by retrieving relevant items from the user's profile. These methods use the user's query directly to retrieve personal documents, a strategy that often leads to surface-level personalization. We propose PR2 (Personalized Retrieval-Augmented Reasoning), a reinforcement learning framework that integrates reasoning with retrieval from personal context for personalization. PR2 learns adaptive retrieval-reasoning policies that determine when to retrieve, what evidence to retrieve from user profiles, and how to incorporate it into intermediate reasoning steps. By optimizing multi-turn reasoning trajectories under a personalized reward function, the framework reinforces reasoning paths that better align with user-specific preferences and contextual signals reflected by the reward model. Extensive experiments on the LaMP-QA benchmark using three LLMs show that PR2 consistently outperforms strong baselines, achieving an average relative improvement of 8.8%–12% in personalized QA.
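The multi-turn "when to retrieve / what to retrieve" loop described above can be sketched in miniature. Everything below is an illustrative assumption, not the paper's implementation: the profile items, the toy lexical relevance scorer (standing in for a learned retriever), and the hand-coded threshold policy that PR2 would instead learn via reinforcement learning under a personalized reward.

```python
# Hypothetical sketch of an adaptive retrieve-then-reason loop.
# All names and the profile content are illustrative assumptions.

PROFILE = [
    "Prefers concise answers with code snippets",
    "Background: mobile app developer, uses Kotlin",
    "Previously asked about battery-efficient networking",
]

def score_relevance(query: str, item: str) -> float:
    """Toy lexical-overlap score; a stand-in for a learned retriever."""
    q, d = set(query.lower().split()), set(item.lower().split())
    return len(q & d) / (len(q) or 1)

def retrieve_reason(query, profile, max_turns=3, threshold=0.2):
    """At each turn, decide WHETHER to retrieve (score meets threshold)
    and WHAT to retrieve (best remaining profile item), appending the
    evidence to the reasoning trace. PR2 learns this policy with RL
    under a personalized reward instead of hand-coding it as here."""
    trace, remaining = [], list(profile)
    for _ in range(max_turns):
        if not remaining:
            break
        best = max(remaining, key=lambda it: score_relevance(query, it))
        if score_relevance(query, best) < threshold:
            break  # policy judges no further personal evidence is needed
        trace.append(best)
        remaining.remove(best)
    return trace

evidence = retrieve_reason("battery efficient networking in Kotlin", PROFILE)
print(evidence)
```

In this toy run, the loop retrieves the two profile items that overlap with the query and stops before the irrelevant one, mirroring the adaptive stopping behavior the framework optimizes end to end.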