UserLM-R1: Modeling Human Reasoning in User Language Models with Multi-Reward Reinforcement Learning

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing user simulators, which rely on static, context-agnostic user profiles: they struggle to generalize to novel scenarios and, lacking strategic reasoning, are vulnerable to manipulation by agents. To overcome these challenges, we propose a user profiling framework that integrates static personas with dynamic goals, enabling goal-driven, strategic responses through a dedicated reasoning mechanism. We further enhance the decision-making capabilities of the user language model via supervised fine-tuning and multi-reward reinforcement learning. As the first approach to incorporate dynamic goals and multi-reward reinforcement learning into user simulation, our method outperforms current state-of-the-art techniques across multiple benchmarks, demonstrating strong cross-scenario generalization and robustness against adversarial manipulation.
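The summary describes user profiles that pair a static persona with a dynamic, scenario-specific goal. A minimal sketch of what such a profile might look like is below; the paper does not publish its schema, so all field names, traits, and the prompt-rendering format here are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class StaticPersona:
    """Reusable, scenario-independent part of the profile (hypothetical fields)."""
    name: str
    traits: list  # e.g. ["frugal", "direct"]

@dataclass
class UserProfile:
    """Static persona plus a dynamic goal set per scenario (hypothetical schema)."""
    persona: StaticPersona
    goal: str  # scenario-specific, swapped out for each new domain
    constraints: list = field(default_factory=list)

    def to_system_prompt(self) -> str:
        # Render the profile into a system prompt for the user language model.
        traits = ", ".join(self.persona.traits)
        lines = [
            f"You are {self.persona.name} ({traits}).",
            f"Your goal in this conversation: {self.goal}",
        ]
        lines += [f"Constraint: {c}" for c in self.constraints]
        return "\n".join(lines)

# Example: the same persona can be reused with a new goal for a new scenario.
profile = UserProfile(
    persona=StaticPersona(name="Alex", traits=["frugal", "direct"]),
    goal="negotiate the laptop price below $800",
    constraints=["never reveal your maximum budget"],
)
print(profile.to_system_prompt())
```

Reusing `StaticPersona` while only swapping `goal` is what would let one profile adapt across scenarios without manual redesign.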

📝 Abstract
User simulators serve as the critical interactive environment for agent post-training, and an ideal user simulator generalizes across domains and proactively engages in negotiation by challenging or bargaining. However, current methods exhibit two issues. They rely on static and context-unaware profiles, necessitating extensive manual redesign for new scenarios, thus limiting generalizability. Moreover, they neglect human strategic thinking, leading to vulnerability to agent manipulation. To address these issues, we propose UserLM-R1, a novel user language model with reasoning capability. Specifically, we first construct comprehensive user profiles with both static roles and dynamic scenario-specific goals for adaptation to diverse scenarios. Then, we propose a goal-driven decision-making policy to generate high-quality rationales before producing responses, and further refine the reasoning and improve strategic capabilities with supervised fine-tuning and multi-reward reinforcement learning. Extensive experimental results demonstrate that UserLM-R1 outperforms competitive baselines, particularly on the more challenging adversarial set.
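The abstract's "multi-reward reinforcement learning" presumably scores each rollout along several dimensions and folds them into one training signal. A minimal sketch under that assumption is below: the reward names (`format`, `goal`, `strategy`), the weighted-sum aggregation, and the group-normalized advantages (GRPO-style) are all illustrative choices, not details taken from the paper.

```python
def combined_reward(rewards: dict, weights: dict) -> float:
    """Fold multiple reward components into one scalar via a weighted sum."""
    return sum(weights[k] * rewards[k] for k in weights)

def group_advantages(scores: list) -> list:
    """Group-normalized advantages over a batch of sampled rollouts:
    subtract the group mean and divide by the group std (1.0 if degenerate)."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    std = var ** 0.5 or 1.0  # avoid division by zero when all scores tie
    return [(s - mean) / std for s in scores]

# Hypothetical reward dimensions and weights for a negotiation turn.
weights = {"format": 0.2, "goal": 0.5, "strategy": 0.3}
rollouts = [
    {"format": 1.0, "goal": 0.8, "strategy": 0.5},
    {"format": 1.0, "goal": 0.2, "strategy": 0.9},
    {"format": 0.0, "goal": 0.6, "strategy": 0.4},
]
scores = [combined_reward(r, weights) for r in rollouts]
advs = group_advantages(scores)  # positive advantage -> reinforce that rollout
```

Rollouts scoring above the group mean get positive advantages and are reinforced; how the paper actually weights or normalizes its rewards may differ.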
Problem

Research questions and friction points this paper is trying to address.

user simulator
generalizability
strategic reasoning
human-like negotiation
context-awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

User Language Model
Multi-Reward Reinforcement Learning
Goal-Driven Reasoning
Dynamic User Profiling
Strategic Interaction
Feng Zhang
Meituan, Peking University
Shijia Li
Meituan, Beijing University of Posts and Telecommunications
Chunmao Zhang
Meituan, University of Chinese Academy of Sciences
Zhanyu Ma
Beijing University of Posts and Telecommunications
Pattern Recognition, Machine Learning, Computer Vision, Multimedia Technology, Deep Learning
Jun Xu
Meituan
Jiuchong Gao
Meituan
Jinghua Hao
Meituan
Renqing He
Meituan
Jingwen Xu
Meituan
Han Liu
Associate Professor, Dalian University of Technology
Artificial Intelligence, Machine Learning, Data Mining