🤖 AI Summary
This work addresses the failure of Lloyd's k-means algorithm in high-dimensional, high-noise settings, where nearly every initial partition is a fixed point, so the algorithm cannot recover even clearly separable cluster structure. Through probabilistic analysis, the study proves a fundamental divergence in the high-dimensional behavior of Lloyd's and Hartigan's k-means algorithms: while Lloyd's method degenerates into merely returning its initialization, Hartigan's algorithm remains capable of converging to the correct clustering. This finding explains the frequent empirical failure of standard k-means in high dimensions and gives a first theoretical account of the superior robustness of Hartigan's method under such conditions.
📝 Abstract
Lloyd's k-means algorithm is one of the most widely used clustering methods. We prove that in high-dimensional, high-noise settings the algorithm exhibits catastrophic failure: with high probability, essentially every partition of the data is a fixed point. Consequently, Lloyd's algorithm simply returns its initial partition, even when the underlying clusters are trivially recoverable by other methods. In contrast, we prove that Hartigan's k-means algorithm does not exhibit this pathology. Our results reveal a stark difference between the two algorithms and offer a theoretical explanation for the empirical difficulties often observed with k-means in high dimensions.
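To make the contrast concrete, here is a minimal toy sketch (not the paper's high-dimensional analysis) of the two update rules. Lloyd's step is a batch update: recompute centroids, then reassign every point to its nearest centroid, so a partition is a fixed point whenever no reassignment occurs. Hartigan's pass instead moves one point at a time to whichever cluster most reduces the within-cluster sum of squares, using the standard size-weighted cost identity, so it can escape partitions that Lloyd's rule cannot. The function names, the tiny 1-D dataset, and the specific fixed-point initialization below are illustrative choices of mine, not from the paper:

```python
import numpy as np

def lloyd_step(X, labels, k):
    """One Lloyd iteration: recompute centroids from the current labels,
    then reassign every point to its nearest centroid (batch update).
    If the returned labels equal the input labels, the partition is a
    fixed point and Lloyd's algorithm terminates."""
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def hartigan_pass(X, labels, k):
    """One Hartigan pass: for each point in turn, move it to the cluster
    that most decreases the total within-cluster sum of squares, using
    the exact size-weighted cost change (which accounts for how the move
    shifts both cluster means)."""
    labels = labels.copy()
    for i in range(len(X)):
        best_j, best_delta = labels[i], 0.0
        for j in range(k):
            if j == labels[i]:
                continue
            src = X[labels == labels[i]]  # point i's current cluster
            dst = X[labels == j]          # candidate destination
            n_s, n_d = len(src), len(dst)
            if n_s <= 1:
                continue  # never empty a cluster
            # WCSS decrease from removing x_i from its cluster ...
            gain = n_s / (n_s - 1) * ((X[i] - src.mean(axis=0)) ** 2).sum()
            # ... and WCSS increase from inserting it into cluster j.
            cost = n_d / (n_d + 1) * ((X[i] - dst.mean(axis=0)) ** 2).sum()
            delta = cost - gain
            if delta < best_delta:
                best_delta, best_j = delta, j
        labels[i] = best_j
    return labels

# A tiny 1-D illustration: the partition {0} | {2, 3, 5} is a suboptimal
# Lloyd fixed point, but a single Hartigan pass escapes it by moving the
# point at 2 into the first cluster.
X = np.array([[0.0], [2.0], [3.0], [5.0]])
init = np.array([0, 1, 1, 1])
print(lloyd_step(X, init, 2))      # unchanged: a Lloyd fixed point
print(hartigan_pass(X, init, 2))   # escapes to [0, 0, 1, 1]
```

The 1-D example only shows that suboptimal Lloyd fixed points exist and that Hartigan's rule can leave them; the paper's result is the far stronger high-dimensional statement that essentially *every* partition becomes such a fixed point for Lloyd's algorithm.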