AI Summary
Existing kernel thinning algorithms (e.g., Kernel Halving, Compress) provide theoretical guarantees only under restrictive distributional and kernel assumptions, and their performance degrades severely with increasing dimensionality. This work introduces the first general low-rank thinning framework applicable to arbitrary distributions and arbitrary kernels, eliminating reliance on specific distribution families or kernel structures. Leveraging sub-Gaussian sampling theory, low-rank matrix analysis, and approximate kernel decomposition, we design a scalable thinning algorithm that integrates randomized reordering with low-rank approximation. We theoretically establish that our method achieves high-fidelity compression when the kernel or data matrix is approximately low-rank, substantially mitigating the curse of dimensionality. In practice, the resulting approaches improve upon the best known guarantees for transformer attention approximation, SGD acceleration, and two-sample testing, while maintaining near-linear time complexity and strong generalization across diverse settings.
Abstract
The goal in thinning is to summarize a dataset using a small set of representative points. Remarkably, sub-Gaussian thinning algorithms like Kernel Halving and Compress can match the quality of uniform subsampling while substantially reducing the number of summary points. However, existing guarantees cover only a restricted range of distributions and kernel-based quality measures and suffer from pessimistic dimension dependence. To address these deficiencies, we introduce a new low-rank analysis of sub-Gaussian thinning that applies to any distribution and any kernel, guaranteeing high-quality compression whenever the kernel or data matrix is approximately low-rank. To demonstrate the broad applicability of the techniques, we design practical sub-Gaussian thinning approaches that improve upon the best known guarantees for approximating attention in transformers, accelerating stochastic gradient training through reordering, and distinguishing distributions in near-linear time.
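To make the thinning objective concrete, here is a minimal toy sketch contrasting a kernel-based summary with the full sample. It uses kernel herding, a simple greedy relative of the sub-Gaussian algorithms discussed above (not Kernel Halving or Compress themselves), and measures summary quality with the squared maximum mean discrepancy (MMD), a standard kernel-based quality measure. The names `gaussian_kernel`, `mmd_sq`, and `herd` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_sq(X, Y, bandwidth=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # the empirical distributions of X and Y (0 iff the kernel means match).
    return (gaussian_kernel(X, X, bandwidth).mean()
            - 2 * gaussian_kernel(X, Y, bandwidth).mean()
            + gaussian_kernel(Y, Y, bandwidth).mean())

def herd(X, m, bandwidth=1.0):
    # Greedy kernel herding: repeatedly add the point whose kernel column
    # best matches the full sample's kernel mean embedding, after
    # discounting the points already chosen.
    K = gaussian_kernel(X, X, bandwidth)
    mean_embed = K.mean(axis=0)
    chosen = []
    for t in range(m):
        score = mean_embed.copy()
        if chosen:
            score -= K[chosen].sum(axis=0) / (t + 1)
        score[chosen] = -np.inf  # no repeated points
        chosen.append(int(np.argmax(score)))
    return X[chosen]

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))   # full dataset of n = 64 points
summary = herd(X, 8)           # thinned summary of m = 8 points
```

A good thinning algorithm drives `mmd_sq(summary, X)` toward zero with far fewer points than uniform subsampling needs; the low-rank analysis in this work characterizes when sub-Gaussian thinning achieves this for general kernels and distributions.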