ClusterFusion: Hybrid Clustering with Embedding Guidance and LLM Adaptation

📅 2025-12-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional clustering methods exhibit limited performance in domain-specific scenarios when relying on pretrained embeddings, while fine-tuning incurs high computational costs. Existing LLM-augmented approaches typically treat large language models as auxiliary components, failing to fully harness their contextual reasoning capabilities. This paper introduces ClusterFusion—the first hybrid text clustering framework that places an LLM at the core of clustering, guided by lightweight embeddings. It operates through three sequential stages: subset partitioning, thematic induction, and topic assignment—thereby deeply integrating domain knowledge and user preferences. Its key innovation lies in abandoning the conventional LLM post-processing paradigm in favor of prompt-engineering-driven, interpretable theme generation and fine-grained text assignment. Extensive evaluation on three public benchmarks and two newly constructed domain-specific datasets demonstrates significant improvements over state-of-the-art methods, especially in specialized domains. The code, datasets, and full experimental results are publicly released.

Technology Category

Application Category

📝 Abstract
Text clustering is a fundamental task in natural language processing, yet traditional clustering algorithms with pre-trained embeddings often struggle in domain-specific contexts without costly fine-tuning. Large language models (LLMs) provide strong contextual reasoning, yet prior work mainly uses them as auxiliary modules to refine embeddings or adjust cluster boundaries. We propose ClusterFusion, a hybrid framework that instead treats the LLM as the clustering core, guided by lightweight embedding methods. The framework proceeds in three stages: embedding-guided subset partition, LLM-driven topic summarization, and LLM-based topic assignment. This design enables direct incorporation of domain knowledge and user preferences, fully leveraging the contextual adaptability of LLMs. Experiments on three public benchmarks and two new domain-specific datasets demonstrate that ClusterFusion not only achieves state-of-the-art performance on standard tasks but also delivers substantial gains in specialized domains. To support future work, we release our newly constructed dataset and results on all benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Traditional clustering struggles with domain-specific text without costly fine-tuning.
Existing methods underutilize LLMs' contextual reasoning for core clustering tasks.
Current approaches lack integration of domain knowledge and user preferences effectively.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM as clustering core guided by embeddings
Three-stage hybrid framework for topic summarization
Directly incorporates domain knowledge and user preferences
🔎 Similar Papers
No similar papers found.