MeanAudio: Fast and Faithful Text-to-Audio Generation with Mean Flows

📅 2025-08-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing text-to-audio (TTA) generation models, particularly diffusion- and flow-based approaches, achieve high fidelity and strong controllability but suffer from slow inference, which hinders real-world deployment. To address this, the authors propose MeanAudio, a TTA framework built on the mean flow modeling paradigm with a Flux-style latent-space Transformer. The model is trained to directly regress the mean velocity field, a principled objective grounded in continuous normalizing flows, allowing it to map from the start to the endpoint of the flow trajectory in a single step. An instantaneous-to-mean curriculum with a hybrid flow field training strategy stabilizes training, and folding classifier-free guidance into the training target makes guided sampling cost-free. Experiments demonstrate state-of-the-art single-step generation: on an RTX 3090 GPU, MeanAudio reaches a real-time factor of 0.013 (about 77 seconds of audio synthesized per second of compute), a roughly 100× speedup over typical diffusion-based TTA systems, while also supporting high-fidelity, temporally coherent multi-step synthesis.

📝 Abstract
Recent developments in diffusion- and flow-based models have significantly advanced Text-to-Audio Generation (TTA). While achieving great synthesis quality and controllability, current TTA systems still suffer from slow inference speed, which significantly limits their practical applicability. This paper presents MeanAudio, a novel MeanFlow-based model tailored for fast and faithful text-to-audio generation. Built on a Flux-style latent transformer, MeanAudio regresses the average velocity field during training, enabling fast generation by mapping directly from the start to the endpoint of the flow trajectory. By incorporating classifier-free guidance (CFG) into the training target, MeanAudio incurs no additional cost in the guided sampling process. To further stabilize training, we propose an instantaneous-to-mean curriculum with flow field mix-up, which encourages the model to first learn the foundational instantaneous dynamics, and then gradually adapt to mean flows. This strategy proves critical for enhancing training efficiency and generation quality. Experimental results demonstrate that MeanAudio achieves state-of-the-art performance in single-step audio generation. Specifically, it achieves a real-time factor (RTF) of 0.013 on a single NVIDIA RTX 3090, yielding a 100× speedup over SOTA diffusion-based TTA systems. Moreover, MeanAudio also demonstrates strong performance in multi-step generation, enabling smooth and coherent transitions across successive synthesis steps.
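The single-step property described in the abstract follows from regressing the *average* velocity over an interval rather than the instantaneous velocity: once a model predicts the mean velocity u(z_t, r, t) over [r, t], the endpoint of the flow trajectory is reached in one update. The sketch below is not the paper's implementation; the toy model and the linear flow are illustrative assumptions chosen so the true mean velocity has a closed form.

```python
import numpy as np

def mean_flow_sample(model, z_t, r=0.0, t=1.0):
    """One-step sampling with a mean-velocity model.

    The model predicts the average velocity u(z_t, r, t) over [r, t],
    so the interval is traversed in a single update:
        z_r = z_t - (t - r) * u(z_t, r, t)
    """
    u = model(z_t, r, t)
    return z_t - (t - r) * u

# Toy setup: for a linear flow z_s = (1 - s) * x + s * eps, the
# instantaneous velocity dz/ds = eps - x is constant, so the mean
# velocity over any interval equals eps - x as well.
x = np.array([1.0, -2.0, 0.5])           # pretend data sample
eps = np.array([0.3, 0.1, -0.4])         # pretend Gaussian noise
toy_model = lambda z, r, t: eps - x      # exact mean velocity for this flow

z1 = eps                                 # trajectory endpoint at t = 1
x_hat = mean_flow_sample(toy_model, z1)  # recovers x in one step
```

Because the toy flow is linear, the one-step update is exact here; for a learned model the same update is only as accurate as the regressed mean velocity.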
Problem

Research questions and friction points this paper is trying to address.

Slow inference speed in text-to-audio generation systems
Need for fast and faithful audio synthesis
Challenges in training stability and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

MeanFlow-based model for fast TTA
Classifier-free guidance in training target
Instantaneous-to-mean curriculum strategy
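The abstract describes the curriculum only qualitatively (first learn instantaneous dynamics, then gradually adapt to mean flows) and does not publish a schedule. The helper below is a hypothetical sketch of how a mix-up ratio could ramp during training; the function name and step counts are invented for illustration.

```python
def mean_flow_ratio(step, warmup_steps=10_000, ramp_steps=40_000):
    """Fraction of training targets drawn as mean-flow (r != t) pairs.

    Hypothetical schedule: train purely on instantaneous targets
    (ratio 0.0) during warmup, then ramp linearly until every target
    is a mean-flow pair (ratio 1.0).
    """
    if step < warmup_steps:
        return 0.0
    return min(1.0, (step - warmup_steps) / ramp_steps)
```

At each training step, a batch element would be assigned a mean-flow target with probability `mean_flow_ratio(step)` and an instantaneous target otherwise.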
👥 Authors

Xiquan Li, Shanghai Jiao Tong University (Audio Understanding, Audio Generation, Large Language Models)
Junxi Liu, MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Department of Computer Science, Shanghai Jiao Tong University, China
Yuzhe Liang, Shanghai Jiao Tong University (Deep Learning, Multimodal Learning)
Zhikang Niu, Shanghai Jiao Tong University (Speech Synthesis)
Wenxi Chen, MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Department of Computer Science, Shanghai Jiao Tong University, China
Xie Chen, MoE Key Lab of Artificial Intelligence, X-LANCE Lab, Department of Computer Science, Shanghai Jiao Tong University, China