🤖 AI Summary
Traditional k-means for text clustering relies on numerical embeddings, loses fine-grained semantics, and yields uninterpretable centroids. To address this, we propose *k-LLMmeans*, a novel k-means variant in which natural-language summaries—generated by large language models (LLMs)—serve as semantic cluster centers. Our method integrates document-embedding alignment, mini-batch streaming updates, and LLM-based summarization, allowing cluster centers to evolve in real time while remaining human-readable. Key contributions include: (i) introducing the "summary-as-centroid" paradigm, which reconciles the convergence behavior of k-means with human interpretability; (ii) designing a lightweight online variant whose LLM inference cost is constant per mini-batch, independent of dataset size; and (iii) establishing the StackExchange streaming benchmark and demonstrating statistically significant improvements over standard k-means across multiple metrics. Case studies reveal that dynamically updated centroids follow coherent, semantically meaningful evolutionary trajectories.
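The summary-as-centroid loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `embed` is a toy character-trigram hasher standing in for a real sentence-embedding model, and `summarize` stands in for an LLM call by returning the cluster's most central document. The real method would replace both with an embedding model and an LLM summarization prompt.

```python
import numpy as np

def embed(text):
    # Toy stand-in for a sentence-embedding model: hashed character trigrams,
    # L2-normalized so dot products act as cosine similarity.
    vec = np.zeros(64)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def summarize(docs):
    # Stand-in for an LLM summarization call: return the member document
    # closest to the cluster's mean embedding.
    embs = np.stack([embed(d) for d in docs])
    return docs[int(np.argmax(embs @ embs.mean(axis=0)))]

def k_llmmeans(docs, k, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize textual centroids with k randomly chosen documents.
    summaries = list(rng.choice(docs, size=k, replace=False))
    assign = np.zeros(len(docs), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest centroid *embedding*, as in k-means.
        centroids = np.stack([embed(s) for s in summaries])
        embs = np.stack([embed(d) for d in docs])
        assign = np.argmax(embs @ centroids.T, axis=1)
        # Update step: re-summarize each cluster instead of averaging vectors.
        for j in range(k):
            members = [d for d, a in zip(docs, assign) if a == j]
            if members:
                summaries[j] = summarize(members)
    return summaries, assign
```

The only change relative to vanilla k-means is the update step: the centroid is a piece of text whose embedding drives the next assignment round, so the final `summaries` double as human-readable cluster descriptions.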
📝 Abstract
We introduce k-LLMmeans, a novel modification of the k-means clustering algorithm that utilizes LLMs to generate textual summaries as cluster centroids, thereby capturing contextual and semantic nuances often lost when relying on purely numerical means of document embeddings. This modification preserves the properties of k-means while offering greater interpretability: each cluster centroid is represented by an LLM-generated summary, whose embedding guides cluster assignments. We also propose a mini-batch variant, enabling efficient online clustering for streaming text data and providing real-time interpretability of evolving cluster centroids. Through extensive simulations, we show that our methods outperform vanilla k-means on multiple metrics while incurring only modest LLM usage that does not scale with dataset size. Finally, we present a case study showcasing the interpretability of evolving cluster centroids in sequential text streams. As part of our evaluation, we compile a new dataset from StackExchange, offering a benchmark for text-stream clustering.
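The mini-batch variant's constant LLM cost can be illustrated with a short sketch. All names here are illustrative, not from the paper: `embed` and `summarize` are toy stand-ins (a trigram hasher and a most-central-document picker) for a sentence-embedding model and an LLM call, and the bounded per-cluster buffer is one plausible way to keep each summarization input small.

```python
from collections import deque
import numpy as np

def embed(text):
    # Toy stand-in for a sentence-embedding model (hashed trigrams, unit norm).
    v = np.zeros(32)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % 32] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def summarize(docs):
    # Stand-in for an LLM call: most central document in the buffer.
    embs = np.stack([embed(d) for d in docs])
    return docs[int(np.argmax(embs @ embs.mean(axis=0)))]

class StreamingKLLMmeans:
    """Mini-batch sketch: k summarization calls per batch, so LLM usage
    is constant per batch regardless of how long the stream runs."""

    def __init__(self, seed_docs, buffer_size=20):
        self.summaries = list(seed_docs)  # textual centroids
        # Bounded recency buffer per cluster keeps summarization input small.
        self.buffers = [deque([s], maxlen=buffer_size) for s in seed_docs]

    def partial_fit(self, batch):
        # Assign each incoming document to the nearest centroid embedding.
        centroids = np.stack([embed(s) for s in self.summaries])
        for doc in batch:
            j = int(np.argmax(centroids @ embed(doc)))
            self.buffers[j].append(doc)
        # Exactly k LLM-style calls per mini-batch.
        self.summaries = [summarize(list(b)) for b in self.buffers]
        return self.summaries
```

Because each `partial_fit` re-summarizes only a fixed-size buffer per cluster, the evolving `summaries` give a readable running description of each cluster as the stream drifts.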