🤖 AI Summary
Existing intent clustering methods for customer service dialogues suffer from significant misalignment with human semantic cognition due to overreliance on embedding distances. To address this, we propose an LLM-in-the-loop intent clustering framework that performs iterative semantic evaluation, clustering refinement, and automatic cluster naming via fine-tuned large language models (LLMs), augmented with domain-specific techniques—namely utterance normalization and intent boundary identification. Our contributions are threefold: (1) the first LLM-guided closed-loop clustering paradigm; (2) the largest publicly available Chinese customer-service intent dataset, comprising over 100,000 call transcripts and 1,507 human-annotated intent clusters; and (3) state-of-the-art performance—achieving >95% accuracy in cluster naming and semantic coherence assessment, substantially outperforming baselines in clustering quality and boosting downstream intent classification accuracy by 12%. The code and dataset are open-sourced to advance human-aligned, interpretable natural language understanding research.
📝 Abstract
Discovering customer intentions in dialogue conversations is crucial for automated service agents. Yet, existing intent clustering methods often fail to align with human perceptions due to the heavy reliance on embedding distance metrics and sentence embeddings. To address these limitations, we propose integrating the semantic understanding capabilities of LLMs into an $ extbf{LLM-in-the-loop (LLM-ITL)}$ intent clustering framework. Specifically, this paper (1) investigates the effectiveness of fine-tuned LLMs in semantic coherence evaluation and intent cluster naming, achieving over 95% accuracy; (2) designs an LLM-ITL clustering algorithm that facilitates the iterative discovery of coherent intent clusters; and (3) proposes task-specific techniques tailored for customer service dialogue intent clustering. Since existing English benchmarks pose limited semantic diversity and intent labels, we introduced a comprehensive Chinese dialogue intent dataset, comprising over 100,000 real customer service calls and 1,507 human-annotated intent clusters. The proposed approaches significantly outperformed LLM-guided baselines, achieving notable improvements in clustering quality and a 12% boost in the downstream intent classification task. Combined with several best practices, our findings highlight the potential of LLM-in-the-loop techniques for scalable and human-aligned problem-solving. Sample code and datasets are available at: https://anonymous.4open.science/r/Dial-in-LLM-0410.