TRADE: Transfer of Distributions between External Conditions with Normalizing Flows

📅 2024-10-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling data distributions dynamically modulated by external control parameters (e.g., temperature) remains challenging; existing approaches are constrained by fixed architectures or suffer from instability in energy-based training.

Method: We propose a physics-inspired paradigm for parametric distribution learning, formulating the conditional distribution as a controlled boundary-value problem. The core innovation is a gradient-driven continuous propagation mechanism, in which gradients with respect to the external parameter govern the evolution of the distribution. This decouples the model architecture from the parameter dependence and circumvents the instability inherent in energy-function optimization. Built on normalizing flows, the method supports boundary-condition initialization via i.i.d. sampling or reverse KL divergence, enabling efficient distribution transfer across parameter conditions.

Results: Evaluated on molecular conformation generation, Bayesian posterior estimation, and lattice-based physical modeling, the approach achieves significant improvements in accuracy and generalization over state-of-the-art methods.

📝 Abstract
Modeling distributions that depend on external control parameters is a common scenario in diverse applications like molecular simulations, where system properties like temperature affect molecular configurations. Despite the relevance of these applications, existing solutions are unsatisfactory as they require severely restricted model architectures or rely on energy-based training, which is prone to instability. We introduce TRADE, which overcomes these limitations by formulating the learning process as a boundary value problem. By initially training the model for a specific condition using either i.i.d. samples or backward KL training, we establish a boundary distribution. We then propagate this information across other conditions using the gradient of the unnormalized density with respect to the external parameter. This formulation, akin to the principles of physics-informed neural networks, allows us to efficiently learn parameter-dependent distributions without restrictive assumptions. Experimentally, we demonstrate that TRADE achieves excellent results in a wide range of applications, ranging from Bayesian inference and molecular simulations to physical lattice models.
Problem

Research questions and friction points this paper is trying to address.

Modeling distributions dependent on external control parameters
Overcoming limitations of restricted model architectures
Efficient learning of parameter-dependent distributions without restrictive assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses normalizing flows for distribution transfer
Formulates learning as boundary value problem
Applies physics-informed neural network principles
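The propagation idea can be illustrated on a toy Boltzmann family where everything is analytic. The sketch below is an illustrative assumption, not the paper's implementation: a Gaussian "flow" q(x|β) = N(0, exp(c)/β^b) is fitted at the boundary β = 1 from i.i.d. samples, and a PINN-style residual matches the model's gradient w.r.t. β against the known gradient of the target log-density, transferring the boundary fit to other β. The parameterization, variable names, and training loop are all hypothetical.

```python
import math
import random

random.seed(0)

# Toy Boltzmann family: p(x | beta) ∝ exp(-beta * E(x)) with E(x) = x^2 / 2,
# i.e. p(x | beta) = N(0, 1/beta). The gradient of the unnormalized
# log-density w.r.t. the external parameter beta is just -E(x).
def energy(x):
    return 0.5 * x * x

# Hypothetical one-parameter "flow": q(x | beta) = N(0, exp(c) / beta**b).
# The exact solution of the boundary value problem is c = 0, b = 1.
def log_q(x, beta, c, b):
    var = math.exp(c) / beta ** b
    return -0.5 * math.log(2 * math.pi * var) - x * x / (2 * var)

def dbeta_log_q(x, beta, c, b, h=1e-5):
    # finite-difference gradient of the model w.r.t. the external parameter
    return (log_q(x, beta + h, c, b) - log_q(x, beta - h, c, b)) / (2 * h)

def dbeta_log_p(x, beta):
    # d/dbeta log p(x|beta) = -E(x) - d/dbeta log Z(beta) = -E(x) + 1/(2*beta)
    return -energy(x) + 1.0 / (2 * beta)

# Fixed batch: boundary samples x0 ~ p(x | beta=1), parameter draws, base noise z.
data = [(random.gauss(0, 1), random.uniform(0.5, 2.0), random.gauss(0, 1))
        for _ in range(200)]

def loss(c, b):
    total = 0.0
    for x0, beta, z in data:
        # boundary term: maximum likelihood at beta = 1 on i.i.d. samples
        total += -log_q(x0, 1.0, c, b)
        # propagation term: match d/dbeta of model and target log-density
        x = math.sqrt(math.exp(c) / beta ** b) * z  # reparameterized model sample
        total += (dbeta_log_q(x, beta, c, b) - dbeta_log_p(x, beta)) ** 2
    return total / len(data)

# Crude finite-difference gradient descent over the two parameters (c, b).
c, b, lr, h = 0.5, 0.0, 0.05, 1e-4
for _ in range(400):
    gc = (loss(c + h, b) - loss(c - h, b)) / (2 * h)
    gb = (loss(c, b + h) - loss(c, b - h)) / (2 * h)
    c, b = c - lr * gc, b - lr * gb

print(c, b)  # should approach the exact solution c = 0, b = 1
```

Note that only the *unnormalized* density enters via `-energy(x)`; the log-partition term `1/(2*beta)` is analytic here but in general can be estimated as an expectation, which is what makes the gradient-based propagation tractable without normalizing constants.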
Stefan Wahl
Computer Vision and Learning Lab, Heidelberg University
Armand Rousselot
Computer Vision and Learning Lab, Heidelberg University
Felix Draxler
University of California, Irvine
Machine Learning · Generative Modeling · Normalizing Flows
Ullrich Köthe
Computer Vision and Learning Lab, Heidelberg University