Kernel Tests of Equivalence

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional goodness-of-fit tests struggle to distinguish between "no significant difference" and "practical equivalence," because failure to reject the null hypothesis may merely reflect insufficient test power. This work proposes a kernel-based framework for full-distribution equivalence testing, leveraging the Kernel Stein Discrepancy (KSD) and the Maximum Mean Discrepancy (MMD) to quantify the distance between distributions while incorporating a prespecified minimum equivalence margin. By employing asymptotic normal approximations and bootstrap procedures to compute critical values, the method overcomes the limitations of existing equivalence tests, which are typically confined to parametric models or specific moments. Numerical experiments assess whether the proposed tests can reliably conclude that two distributions are equivalent within the specified margin while controlling Type I and Type II error rates.

📝 Abstract
We propose novel kernel-based tests for assessing the equivalence between distributions. Traditional goodness-of-fit testing is inappropriate for concluding the absence of distributional differences, because failure to reject the null hypothesis may simply be a result of lack of test power, also known as the Type II error. This motivates equivalence testing, which aims to assess the absence of a statistically meaningful effect under controlled error rates. However, existing equivalence tests are either limited to parametric distributions or focus only on specific moments rather than the full distribution. We address these limitations using two kernel-based statistical discrepancies: the kernel Stein discrepancy and the Maximum Mean Discrepancy. The null hypothesis of our proposed tests assumes the candidate distribution differs from the nominal distribution by at least a pre-defined margin, which is measured by these discrepancies. We propose two approaches for computing the critical values of the tests, one using an asymptotic normality approximation, and another based on bootstrapping. Numerical experiments are conducted to assess the performance of these tests.
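The idea in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact procedure: it uses the unbiased MMD² estimator with a Gaussian kernel, takes the null hypothesis to be MMD² ≥ margin², and declares equivalence when a bootstrap upper confidence bound for MMD² falls below the squared margin. The kernel bandwidth, bootstrap scheme, and margin value are all assumptions made for the sketch.

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of squared MMD with a Gaussian kernel."""
    def k(A, B):
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * bandwidth**2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    np.fill_diagonal(Kxx, 0.0)  # drop i == j terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())

def mmd_equivalence_test(X, Y, margin, alpha=0.05, n_boot=500, seed=None):
    """Equivalence test with H0: MMD^2 >= margin^2.

    Rejecting H0 (returning True) supports the claim that the two
    distributions are equivalent within the given margin.
    """
    rng = np.random.default_rng(seed)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        # Resample each sample with replacement and recompute the statistic.
        Xb = X[rng.integers(0, len(X), len(X))]
        Yb = Y[rng.integers(0, len(Y), len(Y))]
        boots[b] = mmd2_unbiased(Xb, Yb)
    # Reject H0 when the bootstrap (1 - alpha) upper bound for MMD^2
    # lies entirely below the squared equivalence margin.
    return bool(np.quantile(boots, 1.0 - alpha) < margin**2)
```

For example, two samples drawn from the same Gaussian should be declared equivalent under a generous margin, while samples from well-separated Gaussians should not.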
Problem

Research questions and friction points this paper is trying to address.

equivalence testing
distributional equivalence
kernel methods
statistical discrepancy
nonparametric testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

kernel Stein discrepancy
Maximum Mean Discrepancy
equivalence testing
nonparametric distribution comparison
bootstrap critical values