🤖 AI Summary
This work addresses the challenging problem of covariate shift adaptation when the density ratio is unbounded -- a setting where existing methods often rely on unrealistic assumptions that the density ratio is either bounded or exactly known. To overcome this limitation, the authors propose a novel three-step estimation procedure: first estimating a relative density ratio, then applying truncation to control its unboundedness, and finally transforming it into a standard density ratio to serve as importance weights in regression. This approach is the first to directly tackle unbounded density ratios, establishing non-asymptotic convergence guarantees that achieve minimax-optimal or near-optimal rates for both the density ratio and the regression function. The method significantly enhances both the theoretical rigor and empirical performance of covariate shift adaptation under realistic conditions.
📄 Abstract
This paper focuses on the problem of unbounded density ratio estimation -- an understudied yet critical challenge in statistical learning -- and its application to covariate shift adaptation. Much of the existing literature assumes that the density ratio is either uniformly bounded or unbounded but known exactly. These conditions are often violated in practice, creating a gap between theoretical guarantees and real-world applicability. In contrast, this work directly addresses unbounded density ratios and integrates them into importance weighting for effective covariate shift adaptation. We propose a three-step estimation method that leverages unlabeled data from both the source and target distributions: (1) estimating a relative density ratio; (2) applying a truncation operation to control its unboundedness; and (3) transforming the truncated estimate back into the standard density ratio. The estimated density ratio is then employed as importance weights for regression under covariate shift. We establish rigorous, non-asymptotic convergence guarantees for both the proposed density ratio estimator and the resulting regression function estimator, demonstrating optimal or near-optimal convergence rates. Our findings offer new theoretical insights into density ratio estimation and learning under covariate shift, extending classical learning theory to more practical and challenging scenarios.
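The three-step procedure described above can be sketched in code. This is only an illustrative mock-up, not the paper's estimator: the Gaussian KDE plug-in for the densities, the mixing weight `alpha`, and the truncation level `tau` are all assumptions made for the sketch. It relies on the identity that the relative ratio r_α = q/(αq + (1-α)p) is bounded above by 1/α even when q/p is unbounded, and can be inverted to recover the standard ratio via r = (1-α)r_α / (1 - αr_α).

```python
# Hedged sketch of the abstract's three-step procedure (illustrative only):
# KDE plug-in densities, alpha, and tau are assumptions, not the paper's method.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, size=500)   # unlabeled source sample
x_tgt = rng.normal(0.5, 1.2, size=500)   # unlabeled target sample

alpha, tau = 0.1, 5.0                    # mixing weight, truncation level
p = gaussian_kde(x_src)                  # source density estimate
q = gaussian_kde(x_tgt)                  # target density estimate

def density_ratio(x):
    x = np.atleast_1d(x)
    # Step 1: relative density ratio r_alpha = q / (alpha*q + (1-alpha)*p),
    # which stays below 1/alpha even when q/p is unbounded.
    r_alpha = q(x) / (alpha * q(x) + (1 - alpha) * p(x))
    # Step 2: truncate to control the estimator's tail behavior.
    r_alpha = np.minimum(r_alpha, tau)
    # Step 3: invert the relative ratio back to the standard ratio q/p.
    return (1 - alpha) * r_alpha / np.maximum(1 - alpha * r_alpha, 1e-12)

# The estimated ratio then serves as importance weights for regression
# under covariate shift (labeled data available on the source only).
y_src = 2.0 * x_src + rng.normal(0.0, 0.3, size=500)
w = density_ratio(x_src)
X = np.column_stack([np.ones_like(x_src), x_src])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_src))
```

Note how the truncation in step 2 caps the transformed ratio at (1-α)τ/(1-ατ), which is what keeps the importance weights, and hence the weighted regression, well behaved.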