🤖 AI Summary
This study addresses the critical challenge of estimating class priors from unlabeled data, a fundamental problem in weakly supervised learning scenarios such as positive-unlabeled (PU) learning, label noise learning, and domain adaptation. By introducing a conditional independence assumption on class labels, the work overcomes the traditional irreducibility constraint, thereby enabling identifiability of mixture proportions under broader conditions. Building on this insight, the authors develop a method-of-moments estimator for class priors. Additionally, they propose a kernel-based hypothesis test to validate the conditional independence assumption, with potential extensions to causal discovery and fairness assessment. Theoretical analysis and empirical experiments demonstrate that the proposed estimator outperforms existing approaches, and the kernel-based test effectively controls both Type I and Type II errors.
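To make the method-of-moments idea concrete, here is a minimal toy sketch. It is not the paper's estimator: it assumes a one-dimensional two-component mixture whose component means `mu0` and `mu1` are known, in which case the class prior is identified by matching the first moment, `E[X] = pi * mu1 + (1 - pi) * mu0`.

```python
import numpy as np

# Toy method-of-moments sketch for mixture proportion estimation.
# Assumption (illustrative only, not the paper's setting): the component
# means mu0 and mu1 are known, so the first moment identifies the prior:
#   E[X] = pi * mu1 + (1 - pi) * mu0  =>  pi = (E[X] - mu0) / (mu1 - mu0)

rng = np.random.default_rng(0)
pi_true, mu0, mu1 = 0.3, 0.0, 2.0

n = 100_000
y = rng.random(n) < pi_true                       # latent class labels
x = np.where(y, rng.normal(mu1, 1.0, n), rng.normal(mu0, 1.0, n))

pi_hat = (x.mean() - mu0) / (mu1 - mu0)           # moment-matching estimate
print(round(pi_hat, 3))                           # close to pi_true = 0.3
```

The paper's actual estimators instead exploit cross-moments of conditionally independent views, which removes the need to know the component distributions, but the moment-matching principle is the same.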
📄 Abstract
Mixture proportion estimation (MPE) aims to estimate class priors from unlabeled data. This task is a critical component of weakly supervised learning, including PU learning, learning with label noise, and domain adaptation. Existing MPE methods rely on the *irreducibility* assumption, or a variant of it, for identifiability. In this paper, we propose novel assumptions based on conditional independence (CI) given the class label, which ensure identifiability even when irreducibility does not hold. We develop method-of-moments estimators under these assumptions and analyze their asymptotic properties. Furthermore, we present weakly supervised kernel tests to validate the CI assumptions, which are of independent interest in applications such as causal discovery and fairness evaluation. Empirically, we demonstrate the improved performance of our estimators compared with existing methods and show that our tests successfully control both Type I and Type II errors.
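To illustrate the flavor of a kernel independence test, the sketch below uses HSIC (Hilbert-Schmidt Independence Criterion) with a permutation-based p-value. This is a deliberately simplified stand-in for the paper's weakly supervised CI tests: it assumes the class label is observed and tests plain independence of two features within one class, which is what the CI assumption asserts per class.

```python
import numpy as np

rng = np.random.default_rng(1)

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels (standard formula)."""
    def gram(v):
        d = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d / (2 * sigma ** 2))
    n = len(x)
    K, L = gram(x), gram(y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

def perm_pvalue(x, y, n_perm=200):
    """Permutation test: shuffle y to simulate the independence null."""
    stat = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

# Synthetic within-class samples (hypothetical data for illustration).
n = 200
x1 = rng.normal(size=n)
x2_indep = rng.normal(size=n)             # independent of x1
x2_dep = x1 + 0.3 * rng.normal(size=n)    # strongly dependent on x1

p_indep = perm_pvalue(x1, x2_indep)       # typically large: CI not rejected
p_dep = perm_pvalue(x1, x2_dep)           # typically small: CI rejected
print(p_indep, p_dep)
```

The paper's tests additionally handle the weakly supervised setting, where the conditioning label is not directly observed; this sketch only conveys the kernel-test mechanism.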