🤖 AI Summary
This paper studies the construction of $(k,z)$-clustering coresets when the input data is corrupted by random noise drawn from a known distribution. Since the underlying ground-truth data is unobservable, conventional cost-based evaluation criteria fail. To address this, we propose a novel proxy error measure that is distribution-agnostic yet adapts to the noise magnitude, yielding provably tighter theoretical bounds than classical measures under mild assumptions and reducing the coreset size by a $\mathrm{poly}(k)$ factor. Based on this measure, we design a provably correct coreset construction algorithm that integrates stochastic noise modeling, clustering-cost sensitivity analysis, and tight error-bound derivation. Theoretical analysis establishes both a smaller coreset size and stronger approximation guarantees, and empirical evaluation on real-world datasets confirms the method's effectiveness and practicality.
📝 Abstract
We study the problem of constructing coresets for $(k, z)$-clustering when the input dataset is corrupted by stochastic noise drawn from a known distribution. In this setting, evaluating the quality of a coreset is inherently challenging, as the true underlying dataset is unobserved. To address this, we investigate coreset construction using surrogate error metrics that are tractable and provably related to the true clustering cost. We analyze a traditional metric from prior work and introduce a new error metric that more closely aligns with the true cost. Although our metric is defined independently of the noise distribution, it enables approximation guarantees that scale with the noise level. We design a coreset construction algorithm based on this metric and show that, under mild assumptions on the data and noise, enforcing an $\varepsilon$-bound under our metric yields smaller coresets and tighter guarantees on the true clustering cost than those obtained via classical metrics. In particular, we prove that the coreset size can improve by a factor of up to $\mathrm{poly}(k)$. Experiments on real-world datasets support our theoretical findings and demonstrate the practical advantages of our approach.
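For concreteness, the $(k,z)$-clustering cost and the $\varepsilon$-coreset condition that the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the standard definitions, not the paper's algorithm; all function names are ours.

```python
import numpy as np

def kz_cost(points, centers, z, weights=None):
    """(k,z)-clustering cost: (weighted) sum over points of
    min-distance-to-a-center raised to the power z."""
    if weights is None:
        weights = np.ones(len(points))
    # Pairwise Euclidean distances, shape (n_points, n_centers).
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return float(np.sum(weights * d.min(axis=1) ** z))

def is_eps_coreset(points, core_pts, core_wts, centers, z, eps):
    """Check the eps-coreset guarantee for one candidate center set:
    |cost_coreset(C) - cost_full(C)| <= eps * cost_full(C).
    (A true coreset must satisfy this for *all* center sets C.)"""
    full = kz_cost(points, centers, z)
    approx = kz_cost(core_pts, centers, z, core_wts)
    return abs(approx - full) <= eps * full
```

For example, the full dataset with unit weights is trivially a $0$-coreset of itself; the paper's contribution is constructing a far smaller weighted subset satisfying this inequality when only noisy observations of `points` are available.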