DUAL: Learning Diverse Kernels for Aggregated Two-sample and Independence Testing

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multiple-kernel aggregation methods for kernel-based two-sample and independence tests on complex structured data suffer from information redundancy due to high similarity among candidate kernels, thereby diminishing statistical power. To address this, we propose a novel multiple-kernel selection framework that jointly optimizes kernel diversity—measured via inter-kernel covariance—and individual test power, while incorporating selective inference to rigorously control Type-I error. Theoretical analysis establishes consistency and asymptotic efficiency of the resulting test. Empirical evaluation across diverse benchmark tasks demonstrates significant improvements in statistical power and robustness over state-of-the-art methods. Our core contribution is the first unified integration of diversity regularization, selective inference, and asymptotic error control within a multiple-kernel testing framework.

📝 Abstract
To adapt kernel two-sample and independence testing to complex structured data, aggregation of multiple kernels is frequently employed to boost testing power compared to single-kernel tests. However, we observe that directly maximizing multiple kernel-based statistics may yield highly similar kernels that capture largely overlapping information, limiting the effectiveness of aggregation. To address this, we propose an aggregated statistic that explicitly incorporates kernel diversity based on the covariance between different kernels. Moreover, we identify a fundamental challenge: a trade-off between the diversity among kernels and the test power of individual kernels, i.e., the selected kernels should be both effective and diverse. This motivates a testing framework with selective inference, which leverages information from the training phase to select kernels with strong individual performance from the learned diverse kernel pool. We provide rigorous theoretical statements and proofs establishing consistency of the test power and control of the Type-I error, along with an asymptotic analysis of the proposed statistics. Lastly, we conduct extensive empirical experiments demonstrating the superior performance of our proposed approach across various benchmarks for both two-sample and independence testing.
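The diversity-penalized aggregation the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual objective: the Gaussian bandwidth grid, the per-sample h-statistic proxy used to estimate inter-kernel covariance, and the penalty weight `lam` are all assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Pairwise squared distances, then a Gaussian (RBF) kernel matrix.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def mmd2_biased(X, Y, bw):
    # Biased (V-statistic) MMD^2 estimate for one kernel; always >= 0.
    Kxx = gaussian_kernel(X, X, bw)
    Kyy = gaussian_kernel(Y, Y, bw)
    Kxy = gaussian_kernel(X, Y, bw)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

def h_values(X, Y, bw):
    # Per-sample contributions to MMD^2 (an illustrative proxy); their
    # covariance across kernels estimates how redundant the kernels are.
    Kxx = gaussian_kernel(X, X, bw)
    Kyy = gaussian_kernel(Y, Y, bw)
    Kxy = gaussian_kernel(X, Y, bw)
    return Kxx.mean(1) + Kyy.mean(1) - Kxy.mean(1) - Kxy.mean(0)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # sample from P
Y = rng.normal(0.5, 1.0, size=(200, 2))   # sample from Q (mean-shifted)

bandwidths = [0.5, 1.0, 2.0, 4.0]          # illustrative candidate kernels
stats = np.array([mmd2_biased(X, Y, bw) for bw in bandwidths])

# Inter-kernel covariance: large off-diagonal entries flag kernels that
# capture overlapping information.
H = np.stack([h_values(X, Y, bw) for bw in bandwidths])
C = np.cov(H)

# Reward individual test statistics, penalize pairwise redundancy.
lam = 0.1                                  # penalty weight, illustrative
redundancy = (np.abs(C).sum() - np.trace(np.abs(C))) / 2.0
aggregated = stats.sum() - lam * redundancy
print(aggregated)
```

In practice the paper optimizes kernel parameters against such an objective and then applies selective inference to pick strong kernels from the diverse pool while keeping the Type-I error calibrated; the snippet only shows how a covariance-based diversity term enters the aggregate.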
Problem

Research questions and friction points this paper is trying to address.

Directly maximizing multiple kernel-based statistics yields redundant kernels that capture overlapping information
Balancing the trade-off between kernel diversity and the test power of individual kernels
Designing an aggregated statistic that incorporates inter-kernel covariance to improve testing power
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicitly incorporates kernel diversity via inter-kernel covariance
Balances kernel diversity against individual test power
Uses selective inference to pick effective kernels from the learned diverse pool