A Permutation-free Kernel Two-Sample Test

📅 2022-11-27
🏛️ Neural Information Processing Systems
📈 Citations: 26
Influential: 2
🤖 AI Summary
Traditional kernel Maximum Mean Discrepancy (MMD) two-sample tests rely on permutation to determine critical thresholds, ensuring finite-sample validity but incurring an O(n²) computational cost per permutation, which becomes prohibitively expensive for large samples. This paper proposes the cross-MMD test statistic: sample-splitting yields a cross U-statistic whose studentized form needs no permutations, giving the first permutation-free quadratic-time kernel MMD test. Under mild assumptions the statistic is asymptotically standard normal under the null, so a single O(n²) computation calibrates the test; the resulting test is consistent against any fixed alternative and, when used with the Gaussian kernel, attains minimax rate-optimal detection power against local alternatives. Empirically, cross-MMD accelerates testing by over an order of magnitude compared to permutation-based approaches on large samples, with only a slight loss in power.
📝 Abstract
The kernel Maximum Mean Discrepancy (MMD) is a popular multivariate distance metric between distributions that has found utility in two-sample testing. The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution. Hence, to design a level-$\alpha$ test, one usually selects the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since every permutation takes quadratic time. We propose the cross-MMD, a new quadratic-time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, the cross-MMD has a limiting standard Gaussian distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power.
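To make the abstract's permutation baseline concrete, here is a minimal sketch of a permutation-calibrated MMD test. The function names, the Gaussian-kernel choice, and the biased (V-statistic) MMD estimate are illustrative assumptions, not the paper's code; the point is that the joint kernel matrix is computed once, but every permutation still re-reads it at O((n+m)²) cost.

```python
import numpy as np

def gaussian_kernel_matrix(Z, bandwidth=1.0):
    # Pairwise Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    sq = np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * Z @ Z.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def permutation_mmd_test(X, Y, alpha=0.05, n_perms=200, bandwidth=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    Z = np.vstack([X, Y])
    K = gaussian_kernel_matrix(Z, bandwidth)  # one O((n+m)^2) computation

    def mmd2(idx):
        # Biased (V-statistic) MMD^2 under a given label assignment;
        # adequate for permutation calibration in this sketch.
        a, b = idx[:n], idx[n:]
        return (K[np.ix_(a, a)].mean() + K[np.ix_(b, b)].mean()
                - 2.0 * K[np.ix_(a, b)].mean())

    obs = mmd2(np.arange(len(Z)))
    # Each permutation re-reads the full kernel matrix: O((n+m)^2) per permutation.
    perms = np.array([mmd2(rng.permutation(len(Z))) for _ in range(n_perms)])
    p_value = (1 + np.sum(perms >= obs)) / (1 + n_perms)
    return obs, p_value, p_value <= alpha
```

The cross-MMD removes exactly this inner loop: one kernel-matrix pass plus a standard-normal quantile replaces the `n_perms` recomputations.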
Problem

Research questions and friction points this paper is trying to address.

Proposes cross-MMD for efficient two-sample testing
Avoids computationally expensive permutation methods
Achieves asymptotic normality and minimax optimal power
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-MMD statistic with sample-splitting
Studentization yielding a standard Gaussian limiting null distribution
Quadratic-time algorithm maintaining optimal power
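The three ingredients above can be sketched as follows. This is an illustrative implementation assuming equal sample sizes and a fixed-bandwidth Gaussian kernel, not the authors' released code: split each sample in half, average kernel evaluations of first-half points against the second halves, then studentize so the statistic is asymptotically N(0, 1) under the null.

```python
import numpy as np
from scipy.stats import norm

def gaussian_kernel(A, B, bandwidth=1.0):
    # Pairwise Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def cross_mmd_test(X, Y, alpha=0.05, bandwidth=1.0):
    """Permutation-free cross-MMD sketch; assumes len(X) == len(Y)."""
    n = len(X) // 2
    X1, X2 = X[:n], X[n:2 * n]  # sample-splitting: first half vs second half
    Y1, Y2 = Y[:n], Y[n:2 * n]
    # For each first-half pair (x_i, y_i), average kernel evaluations against
    # the second halves; u_i estimates the MMD witness-function gap.
    u = (gaussian_kernel(X1, X2, bandwidth).mean(axis=1)
         - gaussian_kernel(X1, Y2, bandwidth).mean(axis=1)
         - gaussian_kernel(Y1, X2, bandwidth).mean(axis=1)
         + gaussian_kernel(Y1, Y2, bandwidth).mean(axis=1))
    # Studentize: one O(n^2) pass, asymptotically N(0, 1) under the null.
    stat = np.sqrt(n) * u.mean() / u.std(ddof=1)
    return stat, stat > norm.ppf(1 - alpha)  # one-sided Gaussian threshold
```

Calibration needs only the standard-normal quantile, so the entire test costs a single quadratic-time pass instead of one pass per permutation.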