🤖 AI Summary
Existing video-to-audio synthesis methods face an inherent trade-off between generation quality and inference efficiency: flow-matching models based on instantaneous velocity estimation require multi-step iterative sampling, resulting in slow inference. To address this, we propose MeanFlow, a novel acceleration framework that replaces instantaneous velocity modeling with average velocity modeling, enabling high-fidelity single-step audio generation. We further introduce a scalar rescaling strategy to mitigate distortion induced by classifier-free guidance under single-step sampling. Additionally, we employ multimodal conditional joint training to unify video-to-audio and text-to-audio synthesis within a single model. Experiments demonstrate that MeanFlow achieves state-of-the-art perceptual quality (e.g., MCD, STOI) while accelerating inference by over 10× compared to conventional flow-matching and diffusion-based baselines. It consistently outperforms existing methods across both tasks, validating its effectiveness, efficiency, and cross-modal generalization capability.
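The core idea, average velocity modeling, can be illustrated with a toy NumPy sketch. In the MeanFlow formulation, a network predicts the average velocity `u(z_t, r, t)` over an interval, so a single step `z_r = z_t - (t - r) * u(z_t, r, t)` from `t = 1` to `r = 0` recovers the sample. The oracle "network" below is a stand-in assumption for illustration, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_flow_one_step(u_net, z1):
    # Single-step MeanFlow sampling: z_0 = z_1 - (1 - 0) * u(z_1, r=0, t=1)
    return z1 - u_net(z1, 0.0, 1.0)

# Toy check: on the straight interpolation path z_t = (1 - t) * x + t * eps,
# the instantaneous velocity is the constant (eps - x), so the average
# velocity over [0, 1] equals it exactly, and one step recovers x.
x = rng.normal(size=(4, 8))           # "data" sample (e.g., a spectrogram patch)
eps = rng.normal(size=(4, 8))         # Gaussian noise, i.e., z_1
u_oracle = lambda z, r, t: eps - x    # hypothetical oracle average-velocity net
x_hat = mean_flow_one_step(u_oracle, eps)
```

In practice the average velocity is of course learned from data; the point of the sketch is only that, once it is known, generation needs no iterative ODE solve.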
📄 Abstract
A key challenge in synthesizing audio from silent videos is the inherent trade-off between synthesis quality and inference efficiency in existing methods. For instance, flow-matching-based models rely on modeling instantaneous velocity and therefore inherently require an iterative sampling process, leading to slow inference. To address this efficiency bottleneck, we introduce a MeanFlow-accelerated model that characterizes flow fields using average velocity, enabling one-step generation and thereby significantly accelerating multimodal video-to-audio (VTA) synthesis while preserving audio quality, semantic alignment, and temporal synchronization. Furthermore, a scalar rescaling mechanism is employed to balance conditional and unconditional predictions when classifier-free guidance (CFG) is applied, effectively mitigating CFG-induced distortion in one-step generation. Since the audio synthesis network is jointly trained with multimodal conditions, we further evaluate it on the text-to-audio (TTA) synthesis task. Experimental results demonstrate that incorporating MeanFlow into the network significantly improves inference speed without compromising perceptual quality on both VTA and TTA synthesis tasks.
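The abstract does not specify the exact form of the scalar rescaling, so the sketch below assumes one common choice: after the standard CFG combination, the guided prediction is rescaled so its standard deviation matches that of the conditional prediction, which counteracts the magnitude blow-up that large guidance scales cause in one-step sampling. The function name and the blend factor `phi` are illustrative assumptions:

```python
import numpy as np

def cfg_rescaled(u_cond, u_uncond, w, phi=1.0):
    """CFG with a scalar rescale (assumed std-matching form, not the paper's exact rule)."""
    u_cfg = u_uncond + w * (u_cond - u_uncond)   # standard CFG combination
    scale = u_cond.std() / (u_cfg.std() + 1e-8)  # scalar that undoes the norm blow-up
    u_rescaled = u_cfg * scale
    # phi blends the rescaled and raw guided predictions (phi=1 -> fully rescaled)
    return phi * u_rescaled + (1.0 - phi) * u_cfg

rng = np.random.default_rng(1)
u_cond = rng.normal(size=(4, 8))      # conditional average-velocity prediction
u_uncond = rng.normal(size=(4, 8))    # unconditional prediction
u_out = cfg_rescaled(u_cond, u_uncond, w=4.0)
```

With `phi=1.0` the output's standard deviation matches the conditional branch regardless of the guidance weight `w`, which is the distortion-mitigation effect the abstract describes.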