🤖 AI Summary
This work addresses the Partial Distribution Matching (PDM) problem—robustly aligning salient subsets of two probability distributions without requiring full distributional alignment. To formalize PDM, we establish, for the first time, the Kantorovich–Rubinstein duality theory for the partial Wasserstein-1 distance. Leveraging this theoretical foundation, we propose PWAN, an adversarial network framework that enables efficient, differentiable partial matching via gradient-based optimization. Our method integrates partial Wasserstein metrics, adversarial training, and bilevel optimization. Evaluated on 3D point-set registration and partial-domain adaptation, PWAN achieves state-of-the-art or competitive performance, significantly improving matching robustness and generalization. It overcomes the strong assumption of complete distribution alignment inherent in conventional optimal transport–based methods.
📝 Abstract
This paper studies the problem of distribution matching (DM), a fundamental machine learning problem that seeks to robustly align two probability distributions. Our approach is built on a relaxed formulation, called partial distribution matching (PDM), which seeks to match only a fraction of the distributions instead of matching them completely. We theoretically derive the Kantorovich-Rubinstein duality for the partial Wasserstein-1 (PW) discrepancy, and develop a partial Wasserstein adversarial network (PWAN) that efficiently approximates the PW discrepancy based on this dual form. Partial matching can then be achieved by optimizing the network using gradient descent. Two practical tasks, point set registration and partial domain adaptation, are investigated, where the goals are to partially match distributions in 3D space and in high-dimensional feature space, respectively. The experimental results confirm that the proposed PWAN effectively produces highly robust matching results, performing better than or on par with state-of-the-art methods.
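The Kantorovich-Rubinstein duality that the paper generalizes can be checked numerically in one dimension, where both sides have simple sample-based estimates: the primal Wasserstein-1 distance matches sorted samples, while the dual maximizes E_μ[f] − E_ν[f] over 1-Lipschitz potentials f. The sketch below is a toy illustration of the classical (non-partial) duality only, not the paper's PW dual or the PWAN network; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)  # samples from mu
y = rng.normal(2.0, 1.0, n)  # samples from nu (shifted by 2, so W1 ~= 2)

# Primal W1 in 1D: optimal transport pairs sorted samples
# (closed form for equal-size empirical distributions).
primal = np.mean(np.abs(np.sort(x) - np.sort(y)))

# Dual (Kantorovich-Rubinstein): construct the optimal 1-Lipschitz
# potential f on a grid. In 1D its slope is -sign(F_mu - F_nu),
# where F_mu, F_nu are the empirical CDFs.
grid = np.linspace(min(x.min(), y.min()) - 1.0,
                   max(x.max(), y.max()) + 1.0, 20000)
F_mu = np.searchsorted(np.sort(x), grid, side="right") / n
F_nu = np.searchsorted(np.sort(y), grid, side="right") / n
dt = grid[1] - grid[0]
f = np.concatenate([[0.0], np.cumsum(-np.sign(F_mu - F_nu)[:-1] * dt)])

# Dual objective: E_mu[f] - E_nu[f], evaluated by interpolating f.
dual = np.interp(x, grid, f).mean() - np.interp(y, grid, f).mean()

print(primal, dual)  # the two estimates agree up to discretization error
```

PWAN replaces this closed-form 1D potential with a neural network trained adversarially, which is what makes the dual estimate tractable in 3D and in high-dimensional feature spaces.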