🤖 AI Summary
This work investigates the design of source distributions in flow-matching generative models: is the Gaussian distribution optimal? Through 2D geometric modeling and high-dimensional training dynamics analysis, we systematically uncover, for the first time, an intrinsic trade-off among density approximation, direction alignment, and norm mismatch in flow matching, and elucidate the geometric origin of Gaussian robustness. Building on this insight, we propose a norm-alignment loss and devise a plug-and-play, direction-aware pruned-sampling strategy that requires no retraining. Evaluated across multiple image generation benchmarks, our approach significantly improves both FID (up to 12% reduction) and sampling efficiency, while maintaining full compatibility with all existing Gaussian-source flow-matching models.
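To make the training-side idea concrete, here is a hedged sketch of conditional flow-matching training with a norm-aligned Gaussian source. Rescaling each Gaussian sample to match the norm of its paired data sample is one plausible reading of the norm-alignment idea, not the paper's actual implementation; all function names are illustrative.

```python
import torch

def norm_aligned_source(x1):
    """Sample a Gaussian source batch, rescaled so each sample's norm
    matches the norm of the paired data sample (hypothetical alignment)."""
    x0 = torch.randn_like(x1)
    scale = x1.flatten(1).norm(dim=1) / x0.flatten(1).norm(dim=1)
    return x0 * scale.view(-1, *([1] * (x1.dim() - 1)))

def flow_matching_loss(model, x1):
    """Standard conditional flow-matching objective on the linear path
    x_t = (1 - t) * x0 + t * x1, with velocity target x1 - x0."""
    x0 = norm_aligned_source(x1)
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))
    x_t = (1 - t) * x0 + t * x1
    return ((model(x_t, t) - (x1 - x0)) ** 2).mean()
```

Apart from the source rescaling, this is the usual conditional flow-matching recipe, so a model trained this way still accepts the standard velocity-field interface at sampling time.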
📝 Abstract
Flow matching has emerged as a powerful generative modeling approach with flexible choices of source distribution. While Gaussian distributions are commonly used, the potential for better alternatives in high-dimensional data generation remains largely unexplored. In this paper, we propose a novel 2D simulation that captures high-dimensional geometric properties in an interpretable setting, enabling us to analyze the learning dynamics of flow matching during training. Based on this analysis, we derive several key insights about flow matching behavior: (1) density approximation can paradoxically degrade performance due to mode discrepancy, (2) directional alignment suffers from path entanglement when the source is overly concentrated, (3) the Gaussian's omnidirectional coverage ensures robust learning, and (4) norm misalignment incurs substantial learning costs. Building on these insights, we propose a practical framework that combines norm-aligned training with directionally pruned sampling. This approach maintains the robust omnidirectional supervision essential for stable flow learning, while eliminating initializations in data-sparse regions during inference. Importantly, our pruning strategy can be applied to any flow matching model trained with a Gaussian source, providing immediate performance gains without the need for retraining. Empirical evaluations demonstrate consistent improvements in both generation quality and sampling efficiency. Our findings provide practical insights and guidelines for source distribution design and introduce a readily applicable technique for improving existing flow matching models. Our code is available at https://github.com/kwanseokk/SourceFM.
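The sampling-side idea can be sketched as follows: oversample Gaussian initializations, score each by how well its direction aligns with the data, and keep only the best-aligned ones before integrating the pretrained flow. The scoring rule below (cosine similarity against a hypothetical cache of unit data directions) is an assumption for illustration, not the paper's exact pruning criterion; no retraining of the flow model is involved.

```python
import torch

def prune_initializations(n_keep, dim, data_dirs, oversample=4):
    """Oversample Gaussian inits, then keep the n_keep whose directions best
    align with cached unit data directions `data_dirs` (shape [M, dim]).
    The cosine-similarity score is a hypothetical stand-in for the paper's
    direction-aware criterion."""
    z = torch.randn(oversample * n_keep, dim)
    z_dir = z / z.norm(dim=1, keepdim=True)
    # Score = best cosine similarity to any cached data direction;
    # low-scoring inits point into data-sparse regions and are pruned.
    scores = (z_dir @ data_dirs.T).max(dim=1).values
    return z[scores.topk(n_keep).indices]
```

Because the surviving initializations are still Gaussian samples, they can be fed unchanged into any Gaussian-source flow-matching sampler, which is what makes the strategy plug-and-play.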