A Unified Data Representation Learning for Non-parametric Two-sample Testing

πŸ“… 2024-11-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In nonparametric two-sample testing, conventional representation learning leverages only the training set, neglecting geometric structure in the test setβ€”thereby limiting statistical power. To address this, we propose RL-TST: a framework that jointly performs self-supervised manifold representation learning and discriminative representation optimization over the full dataset (excluding sample indices), enabling unsupervised exploitation of test-set geometry while rigorously controlling Type-I error. RL-TST synergistically integrates manifold-aware representations, nonparametric test statistics (e.g., MMD, HSIC), and discriminative modeling to co-optimize representation learning and hypothesis testing objectives. Evaluated across diverse benchmarks, RL-TST achieves an average 12.7% improvement in test power, demonstrates robustness to high-dimensional sparse distributions and small-sample regimes, and significantly outperforms state-of-the-art methods.

Technology Category

Application Category

πŸ“ Abstract
Learning effective data representations has been crucial in non-parametric two-sample testing. Common approaches will first split data into training and test sets and then learn data representations purely on the training set. However, recent theoretical studies have shown that, as long as the sample indexes are not used during the learning process, the whole data can be used to learn data representations, meanwhile ensuring control of Type-I errors. The above fact motivates us to use the test set (but without sample indexes) to facilitate the data representation learning in the testing. To this end, we propose a representation-learning two-sample testing (RL-TST) framework. RL-TST first performs purely self-supervised representation learning on the entire dataset to capture inherent representations (IRs) that reflect the underlying data manifold. A discriminative model is then trained on these IRs to learn discriminative representations (DRs), enabling the framework to leverage both the rich structural information from IRs and the discriminative power of DRs. Extensive experiments demonstrate that RL-TST outperforms representative approaches by simultaneously using data manifold information in the test set and enhancing test power via finding the DRs with the training set.
Problem

Research questions and friction points this paper is trying to address.

Learning unified data representations for non-parametric two-sample testing
Ensuring Type-I error control while using entire dataset for representation learning
Combining self-supervised and discriminative learning to enhance test power
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses entire dataset for representation learning
Combines self-supervised and discriminative learning
Leverages test set data without sample indexes
πŸ”Ž Similar Papers
No similar papers found.