🤖 AI Summary
Existing text-to-audio (TTA) generation models—particularly diffusion- and flow-based approaches—achieve high fidelity and strong controllability but suffer from slow inference, hindering real-world deployment. To address this, we propose MeanAudio: a novel TTA framework built on the MeanFlow modeling paradigm and a Flux-style latent-space Transformer architecture. We introduce an "instantaneous-to-mean" curriculum learning scheme and a hybrid flow-field training strategy, enabling classifier-free guidance sampling at zero additional computational cost. Crucially, the model is trained to directly regress the mean velocity field—a principled objective grounded in continuous normalizing flows. Experiments demonstrate state-of-the-art performance under single-step generation: on an RTX 3090 GPU, MeanAudio achieves a real-time factor of 0.013 (i.e., roughly 77 seconds of audio synthesized per second of compute), a 100× speedup over typical diffusion-based systems, while also supporting high-fidelity, temporally coherent multi-step synthesis.
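As a quick sanity check on the reported speed figures, the real-time factor (RTF) is synthesis time divided by audio duration, so its reciprocal gives throughput in seconds of audio per second of wall-clock time. A minimal illustration (the function name is ours, not from the paper):

```python
# RTF = synthesis time / audio duration; the reciprocal is throughput,
# i.e. how many seconds of audio are generated per second of compute.
def rtf_to_throughput(rtf: float) -> float:
    """Seconds of audio generated per second of wall-clock time."""
    return 1.0 / rtf

print(round(rtf_to_throughput(0.013)))  # -> 77
```

This matches the summary's claim: an RTF of 0.013 corresponds to about 77 seconds of audio synthesized per second.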
📝 Abstract
Recent developments in diffusion- and flow-based models have significantly advanced Text-to-Audio Generation (TTA). While achieving high synthesis quality and controllability, current TTA systems still suffer from slow inference, which significantly limits their practical applicability. This paper presents MeanAudio, a novel MeanFlow-based model tailored for fast and faithful text-to-audio generation. Built on a Flux-style latent transformer, MeanAudio regresses the average velocity field during training, enabling fast generation by mapping directly from the start to the endpoint of the flow trajectory. By incorporating classifier-free guidance (CFG) into the training target, MeanAudio incurs no additional cost in the guided sampling process. To further stabilize training, we propose an instantaneous-to-mean curriculum with flow-field mix-up, which encourages the model to first learn the foundational instantaneous dynamics and then gradually adapt to mean flows. This strategy proves critical for enhancing training efficiency and generation quality. Experimental results demonstrate that MeanAudio achieves state-of-the-art performance in single-step audio generation. Specifically, it achieves a real-time factor (RTF) of 0.013 on a single NVIDIA RTX 3090, yielding a 100× speedup over SOTA diffusion-based TTA systems. Moreover, MeanAudio also demonstrates strong performance in multi-step generation, enabling smooth and coherent transitions across successive synthesis steps.
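The "average velocity field" objective can be made concrete. Following the MeanFlow formulation (our notation; sign and time-direction conventions are assumptions, as the abstract does not spell them out), the model regresses a mean velocity rather than the instantaneous one:

```latex
% Average velocity over [r, t], versus the instantaneous velocity v:
u(z_t, r, t) \;\triangleq\; \frac{1}{t - r} \int_{r}^{t} v(z_\tau, \tau)\, d\tau

% Differentiating both sides with respect to t yields the MeanFlow
% identity, which supplies a regression target without evaluating
% the integral (the total derivative expands via the chain rule):
u(z_t, r, t) \;=\; v(z_t, t) \;-\; (t - r)\, \frac{d}{dt}\, u(z_t, r, t)

% Single-step sampling: jump the whole trajectory at once, e.g.
% from noise z_1 to the data endpoint z_0:
z_0 \;=\; z_1 \;-\; u(z_1, 0, 1)
```

A network trained on this target maps the start of the flow trajectory directly to its endpoint in one evaluation, which is what underlies the single-step RTF figures reported above; setting $r = t$ recovers the instantaneous velocity, so the instantaneous-to-mean curriculum can be read as gradually widening the interval $[r, t]$ during training.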