🤖 AI Summary
This work addresses the challenge that MeanFlow struggles to balance generation quality and diversity in multimodal settings under a single function evaluation (1-NFE). To overcome this limitation, the authors propose a novel 1-NFE generative framework that first models a coarse-grained transport via a neural network–parameterized mean velocity field and then refines the outputs through a tailored noise-injection mechanism that enhances sample fidelity. A new loss function is introduced that simultaneously minimizes the Wasserstein distance between probability paths and maximizes sample likelihood. The method achieves near state-of-the-art performance across diverse tasks, including text-to-image synthesis, context-to-molecule generation, and time-series modeling, using only one NFE and with computational overhead comparable to the original MeanFlow.
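For intuition, the two-step generation structure described above can be sketched as follows. This is a minimal illustration only: the network `MeanVelocityNet`, its `(x, t0, t1)` interface, and the Gaussian refinement rule are assumptions for exposition; the paper's actual tailored noise-injection mechanism is not specified in this summary.

```python
import torch
import torch.nn as nn


class MeanVelocityNet(nn.Module):
    """Hypothetical network approximating the mean velocity over an interval [t0, t1]."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 2, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x: torch.Tensor, t0: float, t1: float) -> torch.Tensor:
        # Condition on the interval endpoints by concatenating them to the input.
        t = x.new_full((x.shape[0], 1), t0)
        s = x.new_full((x.shape[0], 1), t1)
        return self.net(torch.cat([x, t, s], dim=-1))


@torch.no_grad()
def sample_1nfe(model: MeanVelocityNet, n: int, dim: int, sigma: float = 0.05) -> torch.Tensor:
    x0 = torch.randn(n, dim)                   # draw from the noise prior
    u = model(x0, 0.0, 1.0)                    # the single network evaluation (1-NFE)
    x1 = x0 + u                                # coarse MeanFlow transport across [0, 1]
    return x1 + sigma * torch.randn_like(x1)   # placeholder refinement: inject scaled noise


samples = sample_1nfe(MeanVelocityNet(dim=8), n=4, dim=8)
print(samples.shape)  # torch.Size([4, 8])
```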
📝 Abstract
Mean flow (MeanFlow) enables efficient, high-fidelity image generation, yet its single function evaluation (1-NFE) sampling often fails to produce compelling results. We address this issue by introducing RMFlow, an efficient multimodal generative model that integrates a coarse 1-NFE MeanFlow transport with a subsequent tailored noise-injection refinement step. RMFlow approximates the average velocity of the flow path with a neural network trained under a new loss function that balances minimizing the Wasserstein distance between probability paths against maximizing sample likelihood. RMFlow achieves near state-of-the-art results on text-to-image, context-to-molecule, and time-series generation using only one NFE, at a computational cost comparable to the MeanFlow baseline.
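The trade-off the new loss encodes might be written schematically as below. The path distance $W$, the model path $p_t^{\theta}$ versus the target path $p_t$, the likelihood term, and the weight $\lambda$ are all illustrative symbols, not the paper's notation, since the abstract does not give the exact formulation.

```latex
% Schematic only: W is a Wasserstein distance between probability paths,
% and \lambda is an assumed trade-off weight between the two objectives.
\mathcal{L}(\theta)
  \;=\; \int_0^1 W\!\left(p_t^{\theta},\, p_t\right)\,\mathrm{d}t
  \;-\; \lambda\, \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log p^{\theta}(x)\right]
```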