🤖 AI Summary
This paper addresses the challenge of evaluating and selecting appropriate clustering algorithms—K-means, DBSCAN, and spectral clustering—for high-dimensional data. We propose a unified evaluation framework that integrates multiple dimensionality reduction techniques (PCA, t-SNE, UMAP) with both internal (silhouette coefficient) and external (Adjusted Rand Index, Normalized Mutual Information) validity metrics. We systematically uncover synergistic interactions between dimensionality reduction methods and clustering algorithms: UMAP preprocessing significantly enhances spectral clustering performance on complex manifold-structured data (e.g., MNIST, Fashion-MNIST, UCI HAR); K-means maintains superior computational efficiency; and DBSCAN excels at detecting irregularly shaped clusters. The framework provides reproducible, interpretable, and empirically grounded guidance for algorithm selection in high-dimensional clustering tasks.
📝 Abstract
This paper presents a comprehensive comparative analysis of three prominent clustering algorithms, K-means, DBSCAN, and Spectral Clustering, on high-dimensional datasets. We introduce a novel evaluation framework that assesses clustering performance across multiple dimensionality reduction techniques (PCA, t-SNE, and UMAP) using diverse quantitative metrics. Experiments conducted on the MNIST, Fashion-MNIST, and UCI HAR datasets reveal that preprocessing with UMAP consistently improves clustering quality across all algorithms, with Spectral Clustering demonstrating superior performance on complex manifold structures. Our findings show that algorithm selection should be guided by data characteristics: K-means excels in computational efficiency, DBSCAN in handling irregularly shaped clusters, and Spectral Clustering in capturing complex relationships. This research contributes a systematic approach for evaluating and selecting clustering techniques for high-dimensional data applications.
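The pipeline described above (dimensionality reduction → clustering → metric computation) can be sketched with off-the-shelf scikit-learn components. This is a minimal illustration, not the paper's actual implementation: it substitutes PCA for UMAP, uses the small `digits` dataset as a stand-in for MNIST, and reports the silhouette coefficient (internal) alongside ARI and NMI (external, label-based) for each algorithm.

```python
# Hedged sketch of the evaluation loop: reduce dimensionality, cluster,
# then score each algorithm with internal and external validity metrics.
# Assumptions: PCA stands in for UMAP/t-SNE; sklearn's digits dataset
# stands in for MNIST; hyperparameters are illustrative, not the paper's.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics import (silhouette_score, adjusted_rand_score,
                             normalized_mutual_info_score)

X, y = load_digits(return_X_y=True)                      # 1797 x 64 features
X_red = PCA(n_components=10, random_state=0).fit_transform(X)

algorithms = {
    "kmeans": KMeans(n_clusters=10, n_init=10, random_state=0),
    "spectral": SpectralClustering(n_clusters=10, random_state=0,
                                   assign_labels="discretize"),
}

results = {}
for name, algo in algorithms.items():
    labels = algo.fit_predict(X_red)
    results[name] = {
        "silhouette": silhouette_score(X_red, labels),   # internal metric
        "ari": adjusted_rand_score(y, labels),           # external metric
        "nmi": normalized_mutual_info_score(y, labels),  # external metric
    }

for name, scores in results.items():
    print(name, {k: round(v, 3) for k, v in scores.items()})
```

DBSCAN is omitted here only because its `eps` parameter needs per-dataset tuning; it slots into the same loop once configured.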