🤖 AI Summary
In emotional support dialogues, ambiguous supporter intentions often lead to strategy misalignment and empathy deficits. To address this, we propose an intention-centered framework that (1) explicitly defines supporter intention categories; (2) models the seeker's multidimensional affective state, including emotional intensity, need type, and cognitive load; and (3) establishes an intention–state–strategy mapping mechanism. We introduce the Intention Centric Chain-of-Thought (ICECoT) mechanism, a reasoning paradigm that enables large language models to explicitly simulate human-like intention inference. Furthermore, we design an expert-knowledge-augmented automated annotation pipeline and a multi-dimensional evaluation suite assessing strategy appropriateness, empathy, safety, and related criteria. Our approach achieves significant improvements over state-of-the-art methods across multiple benchmarks. To foster reproducible research, we publicly release both the dataset and code.
📝 Abstract
In emotional support conversations, unclear intentions can lead supporters to employ inappropriate strategies, inadvertently imposing their expectations or solutions on the seeker. Clearly defined intentions are essential for guiding both the supporter's motivations and the overall emotional support process. In this paper, we propose the Intention-centered Emotional Support Conversation (IntentionESC) framework, which defines the possible intentions of supporters in emotional support conversations, identifies the key aspects of the seeker's emotional state needed to infer these intentions, and maps them to appropriate support strategies. While Large Language Models (LLMs) excel at text generation, they fundamentally operate as probabilistic models trained on extensive datasets and lack a true understanding of human thought processes and intentions. To address this limitation, we introduce the Intention Centric Chain-of-Thought (ICECoT) mechanism. ICECoT enables LLMs to mimic human reasoning by analyzing emotional states, inferring intentions, and selecting suitable support strategies, thereby generating more effective emotional support responses. To train the model with ICECoT and integrate expert knowledge, we design an automated annotation pipeline that produces high-quality training data. Furthermore, we develop a comprehensive evaluation scheme to assess emotional support efficacy and conduct extensive experiments to validate our framework. Our data and code are available at https://github.com/43zxj/IntentionESC_ICECoT.
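The staged reasoning that ICECoT performs (emotional-state analysis → intention inference → strategy selection) can be sketched as a toy pipeline. This is a minimal illustrative sketch only: the state fields, intention labels, and the intention-to-strategy mapping below are hypothetical placeholders, not the paper's actual taxonomy or implementation.

```python
# Hypothetical sketch of the ICECoT reasoning stages:
# emotional state -> supporter intention -> candidate support strategies.
# All category names and rules here are illustrative assumptions.

INTENTION_TO_STRATEGIES = {
    "provide_comfort": ["Reflection of Feelings", "Affirmation and Reassurance"],
    "explore_situation": ["Question", "Restatement or Paraphrasing"],
    "offer_guidance": ["Providing Suggestions", "Information"],
}

def infer_intention(emotional_state: dict) -> str:
    """Toy rules mapping a coarse emotional state to a supporter intention."""
    if emotional_state.get("intensity") == "high":
        return "provide_comfort"   # stabilize strong emotions before problem-solving
    if emotional_state.get("need") == "advice":
        return "offer_guidance"
    return "explore_situation"     # default: learn more about the situation

def icecot_step(emotional_state: dict) -> dict:
    """One reasoning step: state -> intention -> candidate strategies."""
    intention = infer_intention(emotional_state)
    return {"intention": intention,
            "strategies": INTENTION_TO_STRATEGIES[intention]}

result = icecot_step({"intensity": "high", "need": "advice"})
# High intensity takes priority here, so the inferred intention is
# "provide_comfort" and comfort-oriented strategies are selected.
```

In the actual framework this mapping is learned and expressed as an explicit chain-of-thought that the LLM generates before producing the response, rather than hand-written rules.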