🤖 AI Summary
Existing data pruning methods lack robustness on real-world, large-scale datasets with heavy noise. To address this, we propose a robust data pruning algorithm based on geometric median (GM) matching: using a herding-style greedy strategy, we construct a k-subset whose empirical mean converges to the GM of the full dataset, achieving, for the first time, the optimal breakdown point of 1/2 and an improved O(1/k) convergence rate, surpassing the O(1/√k) rate of uniform sampling. The method is distribution-agnostic and provably robust to arbitrary data corruption, without assumptions on the noise distribution. Extensive experiments on multiple deep learning benchmarks show significant gains over state-of-the-art methods, particularly under high corruption rates (>30%) and aggressive pruning ratios (k/n < 5%). This work establishes a new robustness baseline for data pruning.
📝 Abstract
Large-scale data collections in the wild are invariably noisy. Thus, developing data pruning strategies that remain robust even in the presence of corruption is critical in practice. In this work, we propose Geometric Median ($\mathrm{GM}$) Matching -- a herding-style greedy algorithm that yields a $k$-subset such that the mean of the subset approximates the geometric median of the (potentially) noisy dataset. Theoretically, we show that $\mathrm{GM}$ Matching enjoys an improved $\mathcal{O}(1/k)$ scaling over the $\mathcal{O}(1/\sqrt{k})$ scaling of uniform sampling, while achieving the **optimal breakdown point** of **1/2** even under **arbitrary** corruption. Extensive experiments across several popular deep learning benchmarks indicate that $\mathrm{GM}$ Matching consistently improves over the prior state-of-the-art; the gains become more pronounced at high rates of corruption and aggressive pruning rates, making $\mathrm{GM}$ Matching a strong baseline for future research in robust data pruning.
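The two-step procedure described above can be sketched in a few lines of NumPy: first estimate the geometric median with Weiszfeld's fixed-point iteration, then greedily pick points (herding-style) so that the running sum of the selected subset tracks $k$ times the GM. This is a minimal illustration under our own assumptions (Euclidean feature vectors, selection without replacement, and the function names used here), not the paper's reference implementation.

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld's algorithm: fixed-point iteration for the geometric median."""
    mu = X.mean(axis=0)
    for _ in range(iters):
        dist = np.linalg.norm(X - mu, axis=1)
        w = 1.0 / np.maximum(dist, eps)          # inverse-distance weights
        mu_next = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_next - mu) < eps:
            break
        mu = mu_next
    return mu

def gm_matching(X, k):
    """Herding-style greedy selection of a k-subset whose mean tracks the GM."""
    gm = geometric_median(X)
    n = X.shape[0]
    available = np.ones(n, dtype=bool)
    w = gm.copy()                    # herding residual, initialized at the target
    selected = []
    for _ in range(k):
        scores = X @ w               # point most aligned with the current residual
        scores[~available] = -np.inf # select without replacement
        idx = int(np.argmax(scores))
        selected.append(idx)
        available[idx] = False
        w = w + gm - X[idx]          # keep the running sum tracking t * gm
    return selected
```

Because the target is the geometric median rather than the mean, a bounded fraction of arbitrarily corrupted points can shift the target only by a bounded amount, which is what gives the subset mean its robustness in this sketch.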