Regularized $f$-Divergence Kernel Tests

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an adaptive kernel two-sample test applicable to a broad class of $f$-divergences. By introducing a regularized variational representation, the method unifies divergence estimation through witness functions and adaptively selects both the kernel bandwidth and the regularization parameter. It is the first approach to integrate regularized variational forms of $f$-divergences with kernel-based two-sample testing, offering theoretical guarantees on statistical power and introducing a relative testing strategy tailored to machine unlearning evaluation. Empirical results demonstrate that different $f$-divergences, such as the Hockey-Stick divergence, exhibit distinct sensitivities to localized distributional discrepancies, yielding strong performance in applications including differential privacy auditing and the verification of machine unlearning.
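
For reference, the standard unregularized variational form that witness-function estimators of this kind build on is the following (the paper's exact regularized variant is not reproduced here): for a convex $f$ with $f(1) = 0$ and convex conjugate $f^*$,

$$D_f(P \| Q) = \sup_{h} \; \mathbb{E}_{P}[h(X)] - \mathbb{E}_{Q}[f^*(h(Y))],$$

where the supremum is over measurable witness functions $h$; in practice $h$ is restricted to an RKHS and a regularization term is added, which is what makes kernel estimation tractable. The Hockey-Stick divergence of order $\gamma \ge 1$ corresponds to $f(t) = (t - \gamma)_+$ and reduces to

$$E_\gamma(P \| Q) = \sup_{A} \big( P(A) - \gamma\, Q(A) \big),$$

which for $\gamma = e^{\varepsilon}$ is exactly the quantity bounded by $\delta$ in $(\varepsilon, \delta)$-differential privacy, explaining its relevance to privacy auditing.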

📝 Abstract
We propose a framework to construct practical kernel-based two-sample tests from the family of $f$-divergences. The test statistic is computed from the witness function of a regularized variational representation of the divergence, which we estimate using kernel methods. The proposed test is adaptive over hyperparameters such as the kernel bandwidth and the regularization parameter. We provide theoretical guarantees for statistical test power across our family of $f$-divergence estimates. While our test covers a variety of $f$-divergences, we bring particular focus to the Hockey-Stick divergence, motivated by its applications to differential privacy auditing and machine unlearning evaluation. For two-sample testing, experiments demonstrate that different $f$-divergences are sensitive to different localized differences, illustrating the importance of leveraging diverse statistics. For machine unlearning, we propose a relative test that distinguishes true unlearning failures from safe distributional variations.
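
As a concrete illustration of the recipe, here is a minimal sketch of a regularized variational kernel test for one member of the $f$-divergence family, the chi-squared divergence ($f(t) = (t-1)^2$, with convex conjugate $f^*(s) = s + s^2/4$), for which the RKHS-regularized objective is quadratic and the witness has a closed form via the representer theorem. All names here (`gaussian_kernel`, `chi2_statistic`, `bandwidth`, `lam`) and the permutation calibration are illustrative assumptions, not the paper's actual algorithm:

```python
# Sketch: regularized variational chi-squared kernel two-sample test.
# Maximizes  E_P[h] - E_Q[h + h^2/4] - lam * ||h||_H^2  over an RKHS,
# which is quadratic in the dual coefficients and solvable in closed form.
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    """Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def chi2_statistic(X, Y, bandwidth=1.0, lam=1e-2):
    """Regularized variational chi-squared estimate between samples X and Y."""
    n, m = len(X), len(Y)
    Z = np.vstack([X, Y])                                     # pooled sample
    K = gaussian_kernel(Z, Z, bandwidth)
    p = np.concatenate([np.full(n, 1.0 / n), np.zeros(m)])    # empirical P weights
    q = np.concatenate([np.zeros(n), np.full(m, 1.0 / m)])    # empirical Q weights
    # Representer theorem: h = K @ alpha on the pooled sample.  Stationarity
    # of the quadratic objective gives the linear system
    #   (0.5 * diag(q) @ K + 2 * lam * I) alpha = p - q
    A = 0.5 * q[:, None] * K + 2.0 * lam * np.eye(n + m)
    alpha = np.linalg.solve(A, p - q)
    h = K @ alpha                                             # witness values
    # Plug the witness back into the regularized variational objective.
    return p @ h - q @ (h + 0.25 * h ** 2) - lam * alpha @ K @ alpha

def permutation_test(X, Y, n_perm=200, seed=0, **kw):
    """Calibrate the statistic under H0: P = Q by permuting pooled labels."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n = len(X)
    obs = chi2_statistic(X, Y, **kw)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        null.append(chi2_statistic(Z[idx[:n]], Z[idx[n:]], **kw))
    return obs, float(np.mean(np.array(null) >= obs))          # statistic, p-value

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(0.0, 1.0, size=(100, 1))
    Y = rng.normal(0.5, 1.0, size=(100, 1))   # mean-shifted alternative
    stat, pval = permutation_test(X, Y)
    print(f"statistic={stat:.4f}  p-value={pval:.3f}")
```

In the paper's full procedure, the bandwidth and regularization parameter would be selected adaptively rather than fixed as above, and other $f$-divergences (e.g., the Hockey-Stick divergence) generally require iterative rather than closed-form witness estimation.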
Problem

Research questions and friction points this paper is trying to address.

two-sample testing
f-divergence
kernel methods
differential privacy
machine unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

f-divergence
kernel two-sample test
regularized variational representation
Hockey-Stick divergence
machine unlearning