Dimension Reduction for Clustering: The Curious Case of Discrete Centers

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies dimensionality reduction for discrete center clustering—where cluster centers must be selected from the input dataset. Addressing the inadequacy of classical Johnson–Lindenstrauss embeddings under discrete center constraints, we introduce the doubling dimension to this setting and propose a cost-preserving metric embedding. We prove that $O\big(\varepsilon^{-2}(\operatorname{ddim} + \log k + \log\log n)\big)$ dimensions suffice to $(1\pm\varepsilon)$-approximate the cost of any $k$-center clustering. By relaxing the stability condition on optimal solutions, we eliminate the $\log\log n$ term, achieving an almost-optimal dimension bound of $O\big(\varepsilon^{-2}(\operatorname{ddim} + \log k)\big)$. Our results extend classical dimensionality reduction theory to discrete center clustering, providing tight, constructive guarantees for compressing high-dimensional data while preserving clustering cost structure—enabling efficient and theoretically sound discrete $k$-center clustering in reduced dimensions.

📝 Abstract
The Johnson-Lindenstrauss transform is a fundamental method for dimension reduction in Euclidean spaces that can map any dataset of $n$ points into dimension $O(\log n)$ with low distortion of their distances. This dimension bound is tight in general, but one can bypass it for specific problems. Indeed, tremendous progress has been made for clustering problems, especially in the \emph{continuous} setting where centers can be picked from the ambient space $\mathbb{R}^d$. Most notably, for $k$-median and $k$-means, the dimension bound was improved to $O(\log k)$ [Makarychev, Makarychev and Razenshteyn, STOC 2019]. We explore dimension reduction for clustering in the \emph{discrete} setting, where centers can only be picked from the dataset, and present two results that are both parameterized by the doubling dimension of the dataset, denoted $\operatorname{ddim}$. The first result shows that dimension $O_\varepsilon(\operatorname{ddim} + \log k + \log\log n)$ suffices, and is moreover tight, to guarantee that the cost is preserved within factor $1\pm\varepsilon$ for every set of centers. Our second result eliminates the $\log\log n$ term in the dimension through a relaxation of the guarantee (namely, preserving the cost only for all approximately-optimal sets of centers), which maintains its usefulness for downstream applications. Overall, we achieve strong dimension reduction in the discrete setting, and find that it differs from the continuous setting not only in the dimension bound, which depends on the doubling dimension, but also in the guarantees beyond preserving the optimal value, such as which clusterings are preserved.
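The classical Johnson-Lindenstrauss transform mentioned at the start of the abstract can be sketched as a Gaussian random projection. This is a minimal illustration only, not the paper's embedding: the constant `8` in the target dimension is an illustrative choice, and the paper's contribution is precisely that a smaller target dimension suffices for discrete center clustering.

```python
import numpy as np

def jl_project(points, eps, seed=None):
    """Project an (n x d) point set to k = O(eps^-2 log n) dimensions
    via a Gaussian random matrix. Pairwise distances are preserved
    within a 1 +/- eps factor with high probability (classical JL).
    The constant 8 is illustrative, not a tight choice."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))  # target dimension
    # Entries ~ N(0, 1/k), so squared distances are preserved in expectation.
    proj = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
    return points @ proj

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))   # 100 points in R^1000
Y = jl_project(X, eps=0.5, seed=1)

# Distortion check on one pair of points.
orig = np.linalg.norm(X[0] - X[1])
new = np.linalg.norm(Y[0] - Y[1])
ratio = new / orig
```

The projected dimension depends on $\log n$; the paper's results replace this with the dataset's doubling dimension (plus $\log k$ terms) for the discrete $k$-center objective.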
Problem

Research questions and friction points this paper is trying to address.

Dimension reduction for discrete clustering with centers from dataset
Tight dimension bound based on doubling dimension and k
Eliminating log log n term by relaxing cost guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dimension reduction using doubling dimension parameter
Tight bound of $O_\varepsilon(\operatorname{ddim} + \log k + \log\log n)$
Relaxed guarantee eliminates log log n term
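To see why the bound above can beat the classical one, the two target dimensions can be compared numerically. This is a rough illustration with hypothetical constants (both set to 1) and example parameters, not the paper's actual constants:

```python
import math

def classical_jl_dim(n, eps, c=1.0):
    """Classical JL target dimension, O(eps^-2 log n).
    c is an illustrative constant, not from the paper."""
    return math.ceil(c * math.log(n) / eps**2)

def discrete_center_dim(ddim, k, n, eps, c=1.0):
    """The paper's tight bound for discrete k-center:
    O(eps^-2 (ddim + log k + log log n)).
    c is an illustrative constant, not from the paper."""
    return math.ceil(c * (ddim + math.log(k) + math.log(math.log(n))) / eps**2)

# Example parameters: a million points, 10 centers, doubling dimension 4.
n, k, ddim, eps = 10**6, 10, 4, 0.5
jl = classical_jl_dim(n, eps)
dc = discrete_center_dim(ddim, k, n, eps)
```

With these illustrative values the discrete-center bound is already smaller, and the gap widens as $n$ grows while $\operatorname{ddim}$ and $k$ stay fixed, since $\log\log n$ grows far more slowly than $\log n$.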