Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward

📅 2025-04-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient personalization in multi-turn dialogues—particularly under cold-start conditions and limited contextual history—this paper proposes a multi-turn RLHF framework integrated with intrinsic motivation. The core innovation lies in formalizing “curiosity” as an optimizable reward signal that directly measures user modeling accuracy, enabling the dialogue agent to actively probe latent user preferences and personality traits, thereby transcending conventional personalization paradigms reliant on extensive historical interaction data. The method jointly leverages LLM-based user simulation, user representation learning, and multi-step intrinsic reward design. Evaluated on educational and fitness simulation tasks, our approach achieves a 23.6% improvement in user preference identification accuracy and accelerates personalized response adaptation by 1.8× over strong baselines, demonstrating significant gains in empathetic understanding and adaptive behavior.

Technology Category

Application Category

📝 Abstract
Effective conversational agents must be able to personalize their behavior to suit a user's preferences, personality, and attributes, whether they are assisting with writing tasks or operating in domains like education or healthcare. Current training methods like Reinforcement Learning from Human Feedback (RLHF) prioritize helpfulness and safety but fall short in fostering truly empathetic, adaptive, and personalized interactions. Traditional approaches to personalization often rely on extensive user history, limiting their effectiveness for new or context-limited users. To overcome these limitations, we propose to incorporate an intrinsic motivation to improve the conversational agents's model of the user as an additional reward alongside multi-turn RLHF. This reward mechanism encourages the agent to actively elicit user traits by optimizing conversations to increase the accuracy of its user model. Consequently, the policy agent can deliver more personalized interactions through obtaining more information about the user. We applied our method both education and fitness settings, where LLMs teach concepts or recommend personalized strategies based on users' hidden learning style or lifestyle attributes. Using LLM-simulated users, our approach outperformed a multi-turn RLHF baseline in revealing information about the users' preferences, and adapting to them.
Problem

Research questions and friction points this paper is trying to address.

Enhancing personalized dialogue with curiosity-driven rewards
Overcoming limitations of traditional user history-based personalization
Improving multi-turn RLHF for adaptive user trait elicitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intrinsic motivation enhances user model accuracy
Multi-turn RLHF with curiosity reward mechanism
LLM-simulated users validate adaptive personalization
🔎 Similar Papers
No similar papers found.