🤖 AI Summary
To address the limited discriminative power of the Maximum Mean Discrepancy (MMD) in goodness-of-fit testing, this paper proposes a regularized kernel discrepancy framework based on spectral filtering. The method constructs flexible test statistics via integral operators, relaxing the stringent assumptions on kernels and filter functions inherent in prior approaches, and achieves rigorous Type I error control with improved statistical power under non-asymptotic guarantees. Theoretically, the proposed test is a natural generalization of existing MMD-based tests, offering greater detection sensitivity and broader theoretical applicability. Empirical evaluations show that it matches or outperforms state-of-the-art methods across diverse scenarios, including multivariate, high-dimensional, and small-sample settings.
📝 Abstract
The widespread adoption of the *maximum mean discrepancy* (MMD) in goodness-of-fit testing has spurred extensive research on its statistical performance. However, recent studies indicate that the inherent structure of MMD may constrain its ability to distinguish between distributions, leaving room for improvement. Regularization techniques have the potential to overcome this limitation by refining the discrepancy measure. In this paper, we introduce a family of regularized kernel-based discrepancy measures constructed via spectral filtering. Our framework can be regarded as a natural generalization of prior studies, removing restrictive assumptions on both kernel functions and filter functions, thereby broadening the methodological scope and theoretical coverage. We establish non-asymptotic guarantees showing that the resulting tests achieve valid Type I error control and improved power. Numerical experiments demonstrate the broader generality and competitive performance of the proposed tests compared with existing methods.
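For context, the unregularized MMD that this framework generalizes is commonly estimated with a U-statistic over kernel evaluations. A minimal NumPy sketch of that baseline estimator follows; the Gaussian kernel and fixed bandwidth are illustrative assumptions, and this is the standard MMD, not the paper's spectral-filtering regularization:

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared_unbiased(X, Y, bandwidth=1.0):
    """Unbiased U-statistic estimate of MMD^2 between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()
```

Under the alternative the estimate is bounded away from zero, while under the null it concentrates near zero, which is what the proposed regularized statistics aim to sharpen.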