🤖 AI Summary
Short-text embedding clustering typically requires a pre-specified number of clusters, limiting its practicality in fully unsupervised settings. To address this, we propose a scalable spectral clustering framework that automatically estimates the number of clusters by analyzing the eigenstructure of a Laplacian matrix constructed from cosine similarities, eliminating the need for manual specification. We further introduce the Cohesion Ratio, an information-theoretic cohesiveness metric that correlates strongly with external evaluation measures, enabling reliable unsupervised quality assessment. Our method is compatible with standard clustering algorithms (e.g., K-Means, Hierarchical Agglomerative Clustering) and incorporates adaptive sampling for computational efficiency. Extensive experiments across six short-text benchmarks and four embedding models show that our approach significantly outperforms mainstream parameter-free methods, including HDBSCAN and OPTICS, in clustering accuracy while remaining efficient, robust, and broadly applicable.
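The eigenstructure analysis described above can be sketched with the classical eigengap heuristic on a symmetric normalized Laplacian. This is a minimal illustration, not the paper's implementation: the authors' actual construction (including the adaptive sampling step and the exact Laplacian variant) may differ.

```python
import numpy as np

def estimate_k(X, k_max=10):
    """Estimate the number of clusters via the largest eigengap of a
    symmetric normalized Laplacian built from cosine similarities.

    A minimal sketch under standard spectral-clustering assumptions;
    the paper's estimator may use a different Laplacian or gap rule.
    """
    # Cosine similarity: L2-normalize rows, then take the Gram matrix.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = np.clip(Xn @ Xn.T, 0.0, 1.0)  # keep similarities non-negative

    # Symmetric normalized Laplacian: L = I - D^{-1/2} S D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(S.sum(axis=1))
    L = np.eye(len(S)) - d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]

    # The largest gap among the leading eigenvalues suggests k.
    eigvals = np.sort(np.linalg.eigvalsh(L))
    gaps = np.diff(eigvals[: k_max + 1])
    return int(np.argmax(gaps)) + 1
```

On embeddings with two well-separated groups, the first two eigenvalues sit near zero and the third jumps away, so the largest gap falls after the second eigenvalue and the estimator returns 2.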
📝 Abstract
Clustering short-text embeddings is a foundational task in natural language processing, yet it remains challenging because the number of clusters must typically be specified in advance. We introduce a scalable spectral method that estimates the number of clusters directly from the eigenspectrum of a Laplacian built from cosine similarities, guided by an adaptive sampling strategy. This sampling enables our estimator to scale efficiently to large datasets without sacrificing reliability. To support intrinsic evaluation of cluster quality without ground-truth labels, we propose the Cohesion Ratio, a simple, interpretable metric that quantifies how much intra-cluster similarity exceeds the global similarity background. It has an information-theoretic motivation inspired by mutual information, and in our experiments it correlates closely with extrinsic measures such as normalized mutual information and homogeneity. Extensive experiments on six short-text datasets and four modern embedding models show that standard algorithms such as K-Means and HAC, when guided by our estimator, significantly outperform popular parameter-light methods such as HDBSCAN, OPTICS, and Leiden. These results demonstrate the practical value of our spectral estimator and Cohesion Ratio for the unsupervised organization and evaluation of short-text data. An implementation of our k estimator and Cohesion Ratio, along with code to reproduce the experiments, is available at https://anonymous.4open.science/r/towards_clustering-0C2E.
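As a rough illustration of the "intra-cluster similarity vs. global background" idea, the hypothetical score below divides the mean intra-cluster cosine similarity by the mean similarity over all pairs. The paper's actual Cohesion Ratio has an information-theoretic formulation and is likely defined differently; this sketch only shows why such a ratio rewards cohesive clusterings.

```python
import numpy as np

def cohesion_ratio(X, labels):
    """Hypothetical cohesion-style score: mean intra-cluster cosine
    similarity divided by the mean over all off-diagonal pairs.

    NOT the paper's Cohesion Ratio, whose exact (information-theoretic)
    definition is not given here; this only illustrates the idea that
    good clusterings concentrate similarity inside clusters.
    """
    labels = np.asarray(labels)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T

    off = ~np.eye(len(X), dtype=bool)           # exclude self-pairs
    global_mean = S[off].mean()                 # global background
    same = (labels[:, None] == labels[None, :]) & off
    intra_mean = S[same].mean()                 # within-cluster average
    return intra_mean / global_mean
```

A clustering that matches the true groups yields a ratio above 1, while a labeling that mixes the groups pulls the intra-cluster average down toward the global background.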