Provably faster randomized and quantum algorithms for k-means clustering via uniform sampling

📅 2025-04-29
🤖 AI Summary
Classical k-means requires O(n) time per iteration, a scalability bottleneck on large-scale datasets. Method: The paper proposes a classical-quantum hybrid algorithm based on uniform sampling (as opposed to conventional norm-weighted sampling), which preserves the intrinsic symmetries of the k-means objective and avoids the degradation in theoretical guarantees incurred by norm-based sampling. By combining randomized mini-batch optimization with quantum amplitude estimation, the algorithm needs only O(log n) data accesses per iteration. Contribution/Results: The authors prove a per-iteration complexity of O(log n), improving on existing quantum clustering algorithms such as q-means, and establish a provable exponential speedup under an ε-approximate optimality guarantee for the clustering solution. Empirical evaluations confirm the algorithm's efficiency and robustness on large-scale datasets.

📝 Abstract
The $k$-means algorithm (Lloyd's algorithm) is a widely used method for clustering unlabeled data. A key bottleneck of the $k$-means algorithm is that each iteration requires time linear in the number of data points, which can be expensive in big data applications. This was improved in recent works proposing quantum and quantum-inspired classical algorithms to approximate the $k$-means algorithm locally, in time depending only logarithmically on the number of data points (along with data-dependent parameters) [$q$-means: A quantum algorithm for unsupervised machine learning; Kerenidis, Landman, Luongo, and Prakash, NeurIPS 2019; Do you know what $q$-means?, Doriguello, Luongo, Tang]. In this work, we describe a simple randomized mini-batch $k$-means algorithm and a quantum algorithm inspired by the classical algorithm. We prove worst-case guarantees that significantly improve upon the bounds for previous algorithms. Our improvements are due to a careful use of uniform sampling, which preserves certain symmetries of the $k$-means problem that are not preserved in previous algorithms that use data norm-based sampling.
Problem

Research questions and friction points this paper is trying to address.

Speed up k-means clustering via uniform sampling
Reduce time complexity from linear to logarithmic
Improve quantum and classical mini-batch algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized mini-batch k-means algorithm
Quantum algorithm inspired by classical
Uniform sampling preserves symmetries
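The classical side of the idea can be sketched in a few lines: draw each mini-batch uniformly at random (every point equally likely), so the batch mean is an unbiased estimate of each cluster's centroid update. This is a minimal illustrative sketch, not the paper's exact algorithm or its quantum counterpart; the function name, batch size, and 1/count step-size schedule are assumptions for the example.

```python
import numpy as np

def minibatch_kmeans_uniform(X, k, batch_size=64, n_iters=100, seed=0):
    """Mini-batch k-means with uniformly sampled batches (illustrative sketch).

    Uniform sampling, unlike norm-weighted sampling, weights every data
    point equally, so each batch gives an unbiased view of the clusters.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    # Initialize centroids from k uniformly sampled distinct points.
    centroids = X[rng.choice(n, size=k, replace=False)].copy()
    counts = np.zeros(k)  # per-centroid update counts for decaying step sizes

    for _ in range(n_iters):
        # Uniform mini-batch: only batch_size data accesses per iteration,
        # independent of n.
        batch = X[rng.choice(n, size=batch_size, replace=True)]
        # Assign each batch point to its nearest centroid.
        dists = ((batch[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each assigned centroid toward its points with a 1/count step,
        # so the centroid tracks a running mean of its assigned samples.
        for j, x in zip(labels, batch):
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]
    return centroids
```

The paper's quantum algorithm replaces the explicit distance computations with quantum amplitude estimation to reach O(log n) data accesses per iteration; the uniform-sampling structure above is what makes that substitution preserve the objective's symmetries.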