User Preference Modeling for Conversational LLM Agents: Weak Rewards from Retrieval-Augmented Interaction

πŸ“… 2026-03-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the lack of persistent user modeling in large language models (LLMs) employed as personal assistants, which often forces users to repeatedly restate their preferences across sessions. To overcome this limitation, the authors propose the VARS framework, which enables efficient and interpretable personalized interactions without fine-tuning the frozen LLM backbone. VARS achieves this through online updates of long- and short-term preference vectors, a structured memory store, and an adaptive retrieval mechanism driven by weak reward signals. Experimental results on the MultiSessionCollab benchmark demonstrate that VARS significantly reduces task timeout rates and user effort while achieving task success rates comparable to strong baselines. Furthermore, the learned preference vectors exhibit high interpretability, offering transparent insights into user modeling.

πŸ“ Abstract
Large language models are increasingly used as personal assistants, yet most lack a persistent user model, forcing users to repeatedly restate preferences across sessions. We propose Vector-Adapted Retrieval Scoring (VARS), a pipeline-agnostic, frozen-backbone framework that represents each user with long-term and short-term vectors in a shared preference space and uses these vectors to bias retrieval scoring over structured preference memory. The vectors are updated online from weak scalar rewards from users' feedback, enabling personalization without per-user fine-tuning. We evaluate on MultiSessionCollab, an online multi-session collaboration benchmark with rich user preference profiles, across math and code tasks. Under frozen backbones, the main benefit of user-aware retrieval is improved interaction efficiency rather than large gains in raw task accuracy: our full VARS agent achieves the strongest overall performance, matches a strong Reflection baseline in task success, and reduces timeout rate and user effort. The learned long-term vectors also align with cross-user preference overlap, while short-term vectors capture session-specific adaptation, supporting the interpretability of the dual-vector design. Code, model, and data are available at https://github.com/YurenHao0426/VARS.
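The dual-vector mechanism the abstract describes can be sketched in a few lines. This is an illustrative assumption of how such a system might work, not the paper's actual implementation: the class name, learning rates, and the additive update-and-bias rules are all hypothetical.

```python
import numpy as np

class DualPreferenceModel:
    """Hypothetical sketch: long- and short-term preference vectors,
    updated online from a weak scalar reward, used to bias retrieval."""

    def __init__(self, dim, lr_long=0.01, lr_short=0.1):
        self.long_term = np.zeros(dim)   # slow-moving, cross-session vector
        self.short_term = np.zeros(dim)  # fast-moving, per-session vector
        self.lr_long = lr_long
        self.lr_short = lr_short

    def update(self, item_emb, reward):
        # A weak scalar reward (e.g. user feedback mapped to +1/-1) nudges
        # both vectors toward or away from the embedding of the item the
        # user reacted to; no backbone fine-tuning is involved.
        self.long_term += self.lr_long * reward * item_emb
        self.short_term += self.lr_short * reward * item_emb

    def score(self, query_emb, memory_embs, alpha=0.5, beta=0.5):
        # Bias plain similarity retrieval over the preference memory with
        # a mixture of the two preference vectors.
        base = memory_embs @ query_emb
        pref = memory_embs @ (alpha * self.long_term + beta * self.short_term)
        return base + pref

    def reset_session(self):
        # Short-term adaptation is discarded between sessions.
        self.short_term[:] = 0.0
```

Under this sketch, positively rewarded memory items drift upward in the retrieval ranking for that user, while the session reset keeps transient preferences from contaminating the long-term profile.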
Problem

Research questions and friction points this paper is trying to address.

User Preference Modeling
Conversational LLM Agents
Personalization
Persistent User Model
Retrieval-Augmented Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

User Preference Modeling
Retrieval-Augmented Generation
Personalization
Weak Reward Learning
Frozen LLM