Maximum Mean Discrepancy with Unequal Sample Sizes via Generalized U-Statistics

📅 2025-12-15
🤖 AI Summary
Conventional maximum mean discrepancy (MMD) two-sample tests require equal sample sizes, forcing practitioners to discard data and reducing statistical power. Method: We establish, for the first time, the asymptotic normality of the MMD estimator under unequal sample sizes, removing the long-standing reliance on balanced sampling. We propose a power-optimization criterion based on generalized U-statistics that enables high-power testing with all available data. Contributions: We identify a novel phenomenon, correcting a common misconception: degeneracy of the MMD estimator does not necessarily imply zero population MMD. We derive a more concise and precise variance characterization and rigorously prove statistical consistency and computational feasibility under non-proportional sampling. The proposed framework combines theoretical rigor with practical robustness, substantially enhancing MMD's applicability and efficacy in real-world scenarios.

📝 Abstract
Existing two-sample testing techniques, particularly those based on choosing a kernel for the Maximum Mean Discrepancy (MMD), often assume equal sample sizes from the two distributions. Applying these methods in practice can require discarding valuable data, unnecessarily reducing test power. We address this long-standing limitation by extending the theory of generalized U-statistics and applying it to the usual MMD estimator, resulting in a new characterization of the asymptotic distributions of the MMD estimator with unequal sample sizes (particularly outside the proportional regimes required by previous partial results). This generalization also provides a new criterion for optimizing the power of an MMD test with unequal sample sizes. Our approach preserves all available data, enhancing test accuracy and applicability in realistic settings. Along the way, we give much cleaner characterizations of the variance of MMD estimators, revealing something that might be surprising to those in the area: while zero MMD implies a degenerate estimator, it is sometimes possible to have a degenerate estimator with nonzero MMD as well; we give a construction and a proof that it does not happen in common situations.
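For reference, the "usual MMD estimator" referred to above is the standard unbiased squared-MMD estimator from the MMD literature, which is a two-sample generalized U-statistic and is well defined for samples of unequal sizes $m \ne n$. For $X = (x_1, \dots, x_m) \sim P$, $Y = (y_1, \dots, y_n) \sim Q$, and kernel $k$:

```latex
\widehat{\mathrm{MMD}}_u^2(X, Y)
  = \frac{1}{m(m-1)} \sum_{i \ne j} k(x_i, x_j)
  + \frac{1}{n(n-1)} \sum_{i \ne j} k(y_i, y_j)
  - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j)
```

The within-sample sums exclude $i = j$ terms, which is what makes the estimator unbiased; the paper's contribution concerns the asymptotic distribution of this statistic when $m$ and $n$ grow at different (even non-proportional) rates.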
Problem

Research questions and friction points this paper is trying to address.

Extends MMD testing to handle unequal sample sizes without discarding data.
Develops new asymptotic distribution theory for MMD with unequal samples.
Provides a criterion to optimize test power in unequal sample scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends generalized U-statistics for unequal sample sizes
Preserves all data to enhance test accuracy
Provides new criterion for optimizing test power
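To make the setting concrete, here is a minimal sketch of the unbiased squared-MMD U-statistic computed with unequal sample sizes, using NumPy and a Gaussian kernel as an illustrative choice (the function names and bandwidth are our own, not from the paper):

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased squared-MMD estimator (a generalized U-statistic).

    Valid for unequal sample sizes m != n; no data is discarded.
    """
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    # Exclude the diagonal in the within-sample sums for unbiasedness.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    term_xy = 2.0 * kxy.mean()
    return term_x + term_y - term_xy

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))  # m = 200 samples from P
y = rng.normal(1.0, 1.0, size=(120, 2))  # n = 120 samples from a shifted Q
print(mmd2_unbiased(x, y))  # positive when the distributions differ
```

Note that the estimator itself has always accepted m != n; the paper's results concern its asymptotic distribution and test power in that regime, which this sketch does not implement.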
Aaron Wei
University of British Columbia
Milad Jalali
Independent researcher, Vancouver, Canada
Danica J. Sutherland
University of British Columbia + Amii
Machine Learning