🤖 AI Summary
To address distribution shift in unsupervised domain adaptation, this paper proposes a manifold learning approach based on geometric moment alignment. It is, to our knowledge, the first to use a Siegel embedding to jointly encode the first-order (mean) and second-order (covariance) statistics of the source and target domains into a single Symmetric Positive-Definite (SPD) matrix. Alignment is then performed on the SPD manifold using the Riemannian distance, enabling high-fidelity cross-domain matching. We theoretically prove that this Riemannian distance bounds the target-domain generalization error. Integrating the Siegel embedding, SPD manifold optimization, and an unsupervised training framework, our method achieves significant improvements over state-of-the-art approaches on image denoising and classification tasks, effectively mitigating distribution shift. The implementation is fully open-sourced to ensure reproducibility.
📝 Abstract
We address the problem of distribution shift in unsupervised domain adaptation with a moment-matching approach. Existing methods typically align low-order statistical moments of the source and target distributions in an embedding space using ad-hoc similarity measures. We propose a principled alternative that instead leverages the intrinsic geometry of these distributions by adopting a Riemannian distance for this alignment. Our key novelty lies in expressing the first- and second-order moments as a single symmetric positive definite (SPD) matrix through Siegel embeddings. This enables simultaneous adaptation of both moments using the natural geometric distance on the shared manifold of SPD matrices, preserving the mean and covariance structure of the source and target distributions and yielding a more faithful metric for cross-domain comparison. We connect the Riemannian manifold distance to the target-domain error bound, and validate the method on image denoising and image classification benchmarks. Our code is publicly available at https://github.com/shayangharib/GeoAdapt.
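To make the construction concrete, here is a minimal sketch of the two ingredients the abstract describes: packing a mean vector and covariance matrix into one SPD matrix, and comparing two such matrices with the affine-invariant Riemannian distance. The block form `[[Σ + μμᵀ, μ], [μᵀ, 1]]` is the classical Calvo–Oller-style embedding of a Gaussian into SPD(d+1); the paper's exact Siegel construction and distance may differ, so treat this as an illustrative assumption rather than the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigvalsh

def siegel_embed(mean, cov):
    """Encode (mean, covariance) as a single (d+1)x(d+1) SPD matrix.

    Uses the Calvo-Oller-style block form [[cov + mean mean^T, mean],
    [mean^T, 1]]; an illustrative stand-in for the paper's Siegel embedding.
    """
    mean = np.asarray(mean, dtype=float).reshape(-1, 1)
    cov = np.asarray(cov, dtype=float)
    d = mean.shape[0]
    E = np.empty((d + 1, d + 1))
    E[:d, :d] = cov + mean @ mean.T  # second moment block
    E[:d, d:] = mean                 # first moment in the border
    E[d:, :d] = mean.T
    E[d, d] = 1.0
    return E

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, computed here via the
    generalized eigenvalues of the pencil (B, A)."""
    lam = eigvalsh(B, A)  # eigenvalues of A^{-1} B; positive for SPD inputs
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

# Toy usage: source vs. shifted/rescaled target statistics.
E_src = siegel_embed(np.zeros(2), np.eye(2))
E_tgt = siegel_embed(np.array([1.0, 0.0]), 2.0 * np.eye(2))
gap = airm_distance(E_src, E_tgt)  # the cross-domain alignment objective
```

Because the distance is affine-invariant, congruence transformations `E ↦ GᵀEG` leave it unchanged, which is what makes it a natural metric for comparing moment matrices across domains.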