🤖 AI Summary
Large language models (LLMs) exhibit low strategy-selection accuracy and preference bias in Emotional Support Conversations (ESC), and existing supervised fine-tuning (SFT) methods neglect inter-turn strategic trade-offs. To address this, we propose Chain-of-Strategy Optimization (CSO), the first framework to model strategy preferences at the turn level, together with ESC-Pro, a high-quality, fine-grained preference dataset designed specifically for emotional support conversations. CSO integrates Monte Carlo Tree Search (MCTS) for strategy exploration, strategy-response alignment training, and multi-stage LLM-driven optimization. Evaluated on models including LLaMA-3.1-8B, CSO achieves a 12.6% absolute improvement in strategy accuracy and reduces preference bias by 37.4% relative to standard SFT, and its responses show markedly greater empathy and contextual appropriateness. Our core contributions are: (1) a novel turn-level strategy-modeling paradigm, and (2) ESC-Pro, the first fine-grained, emotion-support-oriented preference dataset.
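The summary mentions MCTS-driven strategy exploration at each dialogue turn. The paper's actual pipeline is not reproduced here; the following is a minimal, self-contained sketch of the general idea of using a bandit-style UCB1 search (the one-step core of MCTS) to explore which support strategy to prefer at a turn. The strategy labels, reward scorer, and all function names are hypothetical stand-ins, not the authors' implementation — in the real pipeline the reward would come from an LLM judging the generated response, not a fixed table.

```python
import math
import random

# Hypothetical subset of ESC strategy labels (illustrative only).
STRATEGIES = ["Question", "Reflection of Feelings",
              "Self-disclosure", "Providing Suggestions"]

def simulated_reward(strategy, rng):
    # Stand-in for an LLM-based scorer of the response generated under
    # `strategy`; here just a fixed mean per strategy plus noise.
    means = {"Question": 0.4, "Reflection of Feelings": 0.7,
             "Self-disclosure": 0.5, "Providing Suggestions": 0.3}
    return means[strategy] + rng.uniform(-0.1, 0.1)

def search_strategy(n_iters=400, c=1.4, seed=0):
    rng = random.Random(seed)
    visits = {s: 0 for s in STRATEGIES}
    total = {s: 0.0 for s in STRATEGIES}
    for t in range(1, n_iters + 1):
        # UCB1: balance exploiting high-value strategies with exploring
        # rarely tried ones.
        def ucb(s):
            if visits[s] == 0:
                return float("inf")
            return total[s] / visits[s] + c * math.sqrt(math.log(t) / visits[s])
        s = max(STRATEGIES, key=ucb)
        total[s] += simulated_reward(s, rng)  # simulate + backpropagate
        visits[s] += 1
    # Most-visited strategy becomes the "chosen" side of a preference
    # pair; under-performing siblings supply "rejected" alternatives.
    return max(STRATEGIES, key=lambda s: visits[s]), visits

best, visits = search_strategy()
print(best, sum(visits.values()))
```

In a full MCTS this selection step would recurse over multi-turn continuations rather than a single turn, but the exploit/explore trade-off it illustrates is the same.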
📝 Abstract
The growing emotional stress in modern society has increased the demand for Emotional Support Conversations (ESC). While Large Language Models (LLMs) show promise for ESC, they face two key challenges: (1) low strategy selection accuracy, and (2) preference bias, limiting their adaptability to users' emotional needs. Existing supervised fine-tuning (SFT) struggles to address these issues, as it rigidly trains models on single gold-standard responses without modeling nuanced strategy trade-offs. To overcome these limitations, we propose Chain-of-Strategy Optimization (CSO), a novel approach that optimizes strategy selection preferences at each dialogue turn. We first leverage Monte Carlo Tree Search to construct ESC-Pro, a high-quality preference dataset with turn-level strategy-response pairs. Training on ESC-Pro with CSO improves both strategy accuracy and bias mitigation, enabling LLMs to generate more empathetic and contextually appropriate responses. Experiments on LLaMA-3.1-8B, Gemma-2-9B, and Qwen2.5-7B demonstrate that CSO outperforms standard SFT, highlighting the efficacy of fine-grained, turn-level preference modeling in ESC.
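The abstract's central idea is optimizing preferences over (strategy, response) pairs per dialogue turn rather than fitting a single gold response. The exact CSO objective is not given in this excerpt; a common way to realize turn-level preference optimization is a DPO-style loss applied per turn, sketched below in plain Python under that assumption. The log-probabilities would in practice come from a policy model and a frozen reference model scoring each turn's chosen and rejected pair; here they are toy numbers.

```python
import math

def logsigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def turn_level_preference_loss(policy, reference, beta=0.1):
    """DPO-style loss averaged over dialogue turns (illustrative sketch).

    policy / reference: lists of (logp_chosen, logp_rejected) per turn,
    i.e. summed token log-probs of that turn's preferred and dispreferred
    (strategy, response) pair under the policy and a frozen reference.
    """
    losses = []
    for (lp_c, lp_r), (ref_c, ref_r) in zip(policy, reference):
        # Reward margin: how much more the policy (relative to the
        # reference) prefers the chosen pair over the rejected one.
        margin = (lp_c - ref_c) - (lp_r - ref_r)
        losses.append(-logsigmoid(beta * margin))
    return sum(losses) / len(losses)

# Toy three-turn dialogue.
loss = turn_level_preference_loss(
    policy=[(-5.0, -7.0), (-4.0, -6.5), (-6.0, -6.2)],
    reference=[(-5.5, -6.8), (-4.5, -6.4), (-6.1, -6.3)],
)
print(round(loss, 3))
```

The key difference from response-level preference tuning is only the granularity: every turn contributes its own chosen/rejected contrast, so strategy trade-offs are supervised throughout the conversation rather than once per dialogue.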