🤖 AI Summary
This paper addresses the problem of efficiently computing constant-factor approximations to $k$-means clustering for massive datasets in the Massively Parallel Computation (MPC) model. Methodologically, it introduces the first distributed algorithm achieving a constant approximation ratio in $o(\log n)$ rounds—specifically, $O(\log\log n \cdot \log\log\log n)$ communication rounds—by embedding an LMP (Lagrangian Multiplier Preserving) facility location approximation into the Jain–Vazirani framework, thereby establishing a provably approximate reduction from $k$-means to facility location. The algorithm employs distributed memory management with per-machine space $O(n^{\sigma})$ and total space $O(n^{1+\varepsilon})$. Its key contribution is circumventing the $\Omega(\log n)$ rounds required by standard iterative approaches, while simultaneously guaranteeing constant approximation quality and strong scalability. This yields a new paradigm for ultra-large-scale clustering that unifies theoretical optimality with practical efficiency.
📝 Abstract
In this paper, we present an efficient massively parallel approximation algorithm for the $k$-means problem. Specifically, we provide an MPC algorithm that computes a constant-factor approximation to an arbitrary $k$-means instance in $O(\log\log n \cdot \log\log\log n)$ rounds. The algorithm uses $O(n^{\sigma})$ bits of memory per machine, where $\sigma > 0$ is a constant that can be made arbitrarily small. The global memory usage is $O(n^{1+\varepsilon})$ bits for an arbitrarily small constant $\varepsilon > 0$, and is thus only slightly superlinear. Recently, Czumaj, Gao, Jiang, Krauthgamer, and Veselý showed that a constant-factor bicriteria approximation can be computed in $O(1)$ rounds in the MPC model. However, our algorithm is the first constant-factor approximation for the general $k$-means problem that runs in $o(\log n)$ rounds in the MPC model.
Our approach builds upon the foundational framework of Jain and Vazirani. The core component of our algorithm is a constant-factor approximation for the related facility location problem. While such an approximation was already achieved in constant time in the work of Czumaj et al. mentioned above, our version additionally satisfies the so-called Lagrangian Multiplier Preserving (LMP) property. This property enables the transformation of a facility location approximation into a comparably good $k$-means approximation.
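For readers unfamiliar with the LMP property, the standard formulation (in the sense of Jain and Vazirani) can be sketched as follows; the notation here is generic textbook notation, not taken from the paper itself.

```latex
% Facility location with uniform opening cost \lambda:
% S = set of opened facilities, conn(S) = total connection cost of clients.
% An algorithm is an LMP \rho-approximation if its output S satisfies
\[
  \mathrm{conn}(S) + \rho\,\lambda\,|S| \;\le\; \rho \cdot \mathrm{OPT}_{\mathrm{FL}(\lambda)} .
\]
% Why this transfers to k-means/k-median: by Lagrangian relaxation, any
% solution with k centers is feasible for FL(\lambda), so
%   OPT_{FL(\lambda)} <= OPT_k + \lambda k .
% Hence, if the facility location algorithm opens exactly |S| = k facilities,
\[
  \mathrm{conn}(S) \;\le\; \rho\,(\mathrm{OPT}_k + \lambda k) - \rho\,\lambda k
               \;=\; \rho \cdot \mathrm{OPT}_k ,
\]
% i.e., the LMP guarantee yields a \rho-approximation for the k-clustering
% objective, with the opening cost \lambda acting as the Lagrange multiplier.
```

The factor $\rho$ multiplying the facility cost (rather than the connection cost) is exactly what makes the cancellation in the last step work; an ordinary $\rho$-approximation for facility location would not suffice.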