Dream to Chat: Model-based Reinforcement Learning on Dialogues with User Belief Modeling

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the lack of unified modeling of user states (emotion, sentiment, and intention) and future-utterance prediction in dialogue systems, as well as poor cross-domain generalization. To this end, the authors propose DreamCUB, a framework that unifies emotion, sentiment, and intention into a coherent user belief representation and constructs a dialogue world model grounded in a Partially Observable Markov Decision Process (POMDP) and the information bottleneck principle. DreamCUB jointly optimizes a pretrained world model, a policy network, and a critic network for model-based reinforcement learning, balancing exploration and exploitation while supporting both in-domain adaptation and cross-domain transfer (e.g., empathetic dialogue). Experiments show state-of-the-art performance on emotion classification and sentiment identification, with notable gains in dialogue coherence, empathy, and cross-domain generalization.

📝 Abstract
World models have been widely applied in robotics, gaming, and autonomous driving, but their application to natural language tasks remains relatively limited. In this paper, we construct a dialogue world model that predicts the user's emotion, sentiment, and intention, as well as future utterances. By defining a POMDP, we argue that emotion, sentiment, and intention can be modeled as the user belief and learned by maximizing an information bottleneck objective. With this user belief modeling, we apply the model-based reinforcement learning framework to the dialogue system and propose DreamCUB. Experiments show that the pretrained dialogue world model achieves state-of-the-art performance on emotion classification and sentiment identification, while dialogue quality is also enhanced by joint training of the policy, critic, and dialogue world model. Further analysis shows that this approach strikes a reasonable exploration-exploitation balance and transfers well to out-of-domain scenarios such as empathetic dialogue.
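The abstract's "user belief solved by maximizing the information bottleneck" can be sketched as a standard variational-IB-style loss: keep the latent belief predictive of user-state labels (cross-entropy) while compressing it toward a simple prior (KL term). This is a minimal numpy illustration of that objective shape, not the paper's implementation; the function name, Gaussian belief parameterization, and `beta` weight are assumptions.

```python
import numpy as np

def ib_belief_loss(label_logits, labels, belief_mu, belief_logvar, beta=0.1):
    """Information-bottleneck-style loss for a Gaussian user belief (sketch).

    The cross-entropy term keeps the belief predictive of user-state labels
    (emotion / sentiment / intention); the KL term to a standard-normal prior
    compresses away dialogue-history detail irrelevant to those labels.
    """
    # Softmax cross-entropy between predicted label distribution and labels.
    probs = np.exp(label_logits - label_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    # KL( N(mu, sigma^2) || N(0, I) ), summed over belief dims, batch-averaged.
    kl = 0.5 * (np.exp(belief_logvar) + belief_mu**2 - 1.0 - belief_logvar)
    kl = kl.sum(axis=1).mean()
    return ce + beta * kl
```

With confident logits and a belief matching the prior, the loss is near zero; pushing the belief mean away from the prior raises the compression term, which is the trade-off the IB objective encodes.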
Problem

Research questions and friction points this paper is trying to address.

Model-based reinforcement learning for dialogue systems
Predicting user emotion, sentiment, and intention
Enhancing dialogue quality through belief modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based reinforcement learning for dialogue systems
User belief modeling via information bottleneck maximization
Joint training of policy, critic and world model
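The joint-training idea listed above can be sketched as an alternating loop: fit the dialogue world model to observed turns, then update the policy and critic on rollouts imagined by that model (the Dreamer-style pattern the paper's title alludes to). All networks, rewards, and numerics below are stubs chosen for illustration, not DreamCUB's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model_step(history):
    """Stub world model: map dialogue-history features to a scalar user-belief
    summary and a predicted next-utterance feature (stand-ins for networks)."""
    belief = np.tanh(history.mean())
    next_utt = history[-1] * 0.9 + belief * 0.1
    return belief, next_utt

def joint_training_loop(n_steps=5, horizon=3, gamma=0.99):
    """Alternate world-model fitting with policy/critic updates on imagined
    rollouts; every 'update' here is a placeholder computation."""
    losses = []
    for _ in range(n_steps):
        history = rng.normal(size=(4,))            # fake dialogue features
        belief, next_utt = world_model_step(history)
        wm_loss = float((next_utt - history[-1]) ** 2)   # prediction error
        # Imagined rollout: the policy acts from the belief, the critic
        # regresses the discounted return of the rollout.
        returns = 0.0
        for t in range(horizon):
            action = belief + rng.normal(scale=0.01)     # policy stub
            returns += gamma**t * -abs(action)           # reward stub
        critic_loss = float((returns - belief) ** 2)     # value-regression stub
        losses.append(wm_loss + critic_loss)
    return losses
```

The point of the sketch is the control flow: the world model supplies both the belief the policy conditions on and the imagined trajectories the critic is trained against, so all three components improve from the same learned dynamics.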
Yue Zhao
Geely AI Lab
Xiaoyu Wang
Geely AI Lab, Beijing Institute of Technology
Dan Wang
Geely AI Lab
Zhonglin Jiang
Geely AI Lab
Qingqing Gu
Geely AI Lab
Teng Chen
Geely AI Lab
Ningyuan Xi
Beihang University
LLM, Natural Language Processing, Machine Learning
Jinxian Qu
Geely AI Lab
Yong Chen
Geely AI Lab
Luo Ji
Alibaba Group
Reinforcement Learning, Automatic Control