Expressive Score-Based Priors for Distribution Matching with Geometry-Preserving Regularization

📅 2025-06-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Distribution matching (DM) is widely employed in fair classification and domain adaptation, yet existing approaches face key limitations: nonparametric methods suffer from poor scalability; adversarial methods are prone to instability and mode collapse; and likelihood-based methods are constrained by fixed-prior bias or training difficulties inherent in explicit density modeling (e.g., normalizing flows). This paper proposes an implicit DM framework that neither assumes a fixed prior nor requires explicit density estimation. Instead, it defines the prior via a learnable score function and, for the first time, introduces denoising score matching for DM prior learning—effectively decoupling score estimation from density modeling. We further incorporate geometric-preserving regularization to enhance stability and representation quality. Experiments demonstrate that our method significantly outperforms baselines—including VAEs and LSGMs—on fair classification, domain adaptation, and domain generalization tasks, achieving superior stability, computational efficiency, and generalization performance.

Technology Category

Application Category

📝 Abstract
Distribution matching (DM) is a versatile domain-invariant representation learning technique that has been applied to tasks such as fair classification, domain adaptation, and domain translation. Non-parametric DM methods struggle with scalability and adversarial DM approaches suffer from instability and mode collapse. While likelihood-based methods are a promising alternative, they often impose unnecessary biases through fixed priors or require explicit density models (e.g., flows) that can be challenging to train. We address this limitation by introducing a novel approach to training likelihood-based DM using expressive score-based prior distributions. Our key insight is that gradient-based DM training only requires the prior's score function -- not its density -- allowing us to train the prior via denoising score matching. This approach eliminates biases from fixed priors (e.g., in VAEs), enabling more effective use of geometry-preserving regularization, while avoiding the challenge of learning an explicit prior density model (e.g., a flow-based prior). Our method also demonstrates better stability and computational efficiency compared to other diffusion-based priors (e.g., LSGM). Furthermore, experiments demonstrate superior performance across multiple tasks, establishing our score-based method as a stable and effective approach to distribution matching. Source code available at https://github.com/inouye-lab/SAUB.
Problem

Research questions and friction points this paper is trying to address.

Addressing scalability and instability in distribution matching methods
Eliminating biases from fixed priors in likelihood-based approaches
Improving stability and efficiency in geometry-preserving regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expressive score-based prior distributions for DM
Geometry-preserving regularization enhances effectiveness
Denoising score matching avoids explicit density models
🔎 Similar Papers
No similar papers found.
Z
Ziyu Gong
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
J
Jim Lim
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
David I. Inouye
David I. Inouye
Assistant Professor, Purdue University
Machine LearningTrustworthy MLDistribution MatchingExplainable AI