Beyond Whole Dialogue Modeling: Contextual Disentanglement for Conversational Recommendation

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing conversational recommendation systems model the dialogue context holistically, making it difficult to distinguish focal intent from background information and leading to inaccurate inference of user needs. To address this, we propose a dual-decoupling framework: first, unsupervised disentanglement of contextual representations via self-supervised contrastive learning; second, explicit semantic separation of focal and background components through counterfactual reasoning. Furthermore, we design an adaptive prompt learning module that interfaces seamlessly with large language models (LLMs) to enhance both intent understanding and response generation. Evaluated on two public benchmarks, our method achieves state-of-the-art performance on both item recommendation and response generation tasks. Notably, it is the first approach to enable *unsupervised* and *interpretable* disentanglement of dialogue context, establishing a new paradigm for personalized conversational recommendation.

📝 Abstract
Conversational recommender systems aim to provide personalized recommendations by analyzing and utilizing contextual information from the dialogue. However, existing methods typically model the dialogue context as a whole, neglecting the inherent complexity and entanglement within it. Specifically, a dialogue comprises both focus information and background information, which mutually influence each other. Current methods tend to model these two types of information jointly, leading to misinterpretation of users' actual needs and thereby lowering recommendation accuracy. To address this issue, this paper proposes DisenCRS, a novel model that introduces contextual disentanglement to improve conversational recommender systems. DisenCRS employs a dual disentanglement framework, comprising self-supervised contrastive disentanglement and counterfactual inference disentanglement, to effectively distinguish focus information from background information in the dialogue context under unsupervised conditions. Moreover, we design an adaptive prompt learning module that automatically selects the most suitable prompt for the specific dialogue context, fully leveraging the power of large language models. Experimental results on two widely used public datasets demonstrate that DisenCRS significantly outperforms existing conversational recommendation models on both item recommendation and response generation tasks.
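The paper's own implementation is not shown on this page. As a rough illustration only, a self-supervised contrastive disentanglement objective of the kind the abstract describes might look like the following numpy sketch: two augmented views of the same dialogue embeddings are projected into focus and background subspaces, an InfoNCE term aligns the focus parts across views, and a decorrelation penalty keeps focus and background apart. All function names, projection matrices, and the loss weighting are hypothetical, not taken from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize rows to unit length for cosine-similarity comparisons.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive is the same-index row of
    `positives`; all other rows in the batch serve as negatives."""
    a = l2_normalize(anchors)
    p = l2_normalize(positives)
    logits = a @ p.T / temperature                # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def disentangle_loss(view1, view2, W_focus, W_bg, lam=0.1):
    """view1/view2: two augmentations of the same dialogue embeddings (B, d).
    The focus parts of the two views should agree (contrastive term), while
    the focus and background parts of the same dialogue are pushed toward
    orthogonality (decorrelation term). Hypothetical sketch, not the
    paper's actual objective."""
    f1, f2 = view1 @ W_focus, view2 @ W_focus
    b1 = view1 @ W_bg
    contrastive = info_nce(f1, f2)
    # Squared cosine similarity between focus and background of each dialogue.
    leak = np.mean(np.sum(l2_normalize(f1) * l2_normalize(b1), axis=1) ** 2)
    return contrastive + lam * leak
```

Because both the InfoNCE term and the squared-cosine penalty are non-negative, the combined loss is bounded below by zero; the counterfactual-inference branch described in the abstract would add a separate term not sketched here.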
Problem

Research questions and friction points this paper is trying to address.

Disentangle focus and background info in dialogue
Improve accuracy of conversational recommender systems
Leverage large language models for adaptive prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual disentanglement framework for context separation
Self-supervised contrastive disentanglement technique
Adaptive prompt learning for large language models
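The adaptive prompt learning module is described only at a high level here. As a minimal sketch under assumed details, selection could be framed as scoring candidate prompt templates against the dialogue-context embedding and picking the best match; in the paper this selection is presumably learned end-to-end, whereas the version below is a frozen retrieval-style illustration with hypothetical names throughout.

```python
import numpy as np

def softmax(x):
    # Stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def select_prompt(context_emb, prompt_embs, prompt_texts, temperature=1.0):
    """Score each candidate prompt by cosine similarity with the dialogue
    context and return the highest-weighted template plus the full weight
    distribution. Illustrative only; not the paper's learned module."""
    c = context_emb / (np.linalg.norm(context_emb) + 1e-8)
    P = prompt_embs / (np.linalg.norm(prompt_embs, axis=1, keepdims=True) + 1e-8)
    weights = softmax(P @ c / temperature)
    best = int(np.argmax(weights))
    return prompt_texts[best], weights
```

The selected template would then be filled with the disentangled focus information and passed to the LLM for recommendation or response generation.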
Guojia An
University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Jie Zou
University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Jiwei Wei
Professor at University of Electronic Science and Technology of China (UESTC)
Cross-Modal Retrieval, Metric Learning, Adversarial Machine Learning, AIGC
Chaoning Zhang
Professor at UESTC (University of Electronic Science and Technology of China)
Computer Vision, LLM and VLM, GenAI and AIGC Detection
Fuming Sun
Dalian Minzu University, Dalian, Liaoning, China
Yang Yang
University of Electronic Science and Technology of China, Chengdu, Sichuan, China