🤖 AI Summary
This work addresses the challenge that single-step flow models, lacking explicit sampling trajectories, struggle to incorporate external constraints for conditional generation and inverse problems. The authors propose the Variational Flow Maps (VFM) framework, which reformulates conditional generation as learning a condition-adapted initial noise distribution. By jointly training a noise adapter and the flow map, VFM enables efficient single-step conditional sampling. Grounded in variational inference, the method introduces a unified optimization objective that substantially enhances both conditional consistency and posterior expressiveness. Experiments demonstrate that VFM achieves high-quality single- or few-step generation across diverse image inverse problems, matching the fidelity of iterative models on ImageNet while accelerating sampling by orders of magnitude.
📝 Abstract
Flow maps enable high-quality image generation in a single forward pass. However, unlike iterative diffusion models, their lack of an explicit sampling trajectory impedes incorporating external constraints for conditional generation and solving inverse problems. We put forth Variational Flow Maps (VFM), a framework for conditional sampling that shifts the perspective of conditioning from "guiding a sampling path" to "learning the proper initial noise". Specifically, given an observation, we seek to learn a noise adapter model that outputs a noise distribution, so that after mapping to the data space via the flow map, the samples respect both the observation and the data prior. To this end, we develop a principled variational objective that jointly trains the noise adapter and the flow map, improving noise-data alignment, such that sampling from a complex data posterior is achieved with a simple adapter. Experiments on various inverse problems show that VFMs produce well-calibrated conditional samples in a single step (or a few steps). On ImageNet, VFM attains competitive fidelity while accelerating sampling by orders of magnitude compared to iterative diffusion/flow models. Code is available at https://github.com/abbasmammadov/VFM
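To make the sampling pipeline concrete, here is a minimal, hypothetical sketch of the single-step conditional sampling described above: a noise adapter maps an observation to a Gaussian over the initial noise, a reparameterized sample is drawn, and a single flow-map evaluation produces the conditional sample. All network shapes, the linear adapter, and the identity flow map are illustrative placeholders, not the paper's actual architecture or objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper works at image scale)
d_obs, d_noise = 4, 8

# Noise adapter: maps observation y to a Gaussian q(z|y) over initial noise.
# A single linear layer stands in for the learned adapter network.
W_mu = rng.normal(scale=0.1, size=(d_noise, d_obs))
W_logvar = rng.normal(scale=0.1, size=(d_noise, d_obs))

def noise_adapter(y):
    """Return mean and log-variance of the condition-adapted noise q(z|y)."""
    return W_mu @ y, W_logvar @ y

def flow_map(z):
    """Stand-in for the pretrained single-step flow map z -> x.
    A fixed identity map is used purely for illustration."""
    return np.eye(d_noise) @ z

def vfm_sample(y):
    """One conditional sample: adapt the noise to y, then map in one step."""
    mu, logvar = noise_adapter(y)
    eps = rng.standard_normal(d_noise)
    z = mu + np.exp(0.5 * logvar) * eps   # reparameterized noise sample
    x = flow_map(z)                       # single forward pass to data space
    # A variational-style regularizer: KL(q(z|y) || N(0, I)), one piece of
    # what a joint training objective might include.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return x, kl

y = rng.standard_normal(d_obs)
x, kl = vfm_sample(y)
```

At training time, one would combine a data-consistency term on `x` with the KL regularizer and backpropagate through both the adapter and the flow map; the sketch only shows the sampling path.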