MeanFlow-Accelerated Multimodal Video-to-Audio Synthesis via One-Step Generation

πŸ“… 2025-09-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing video-to-audio synthesis methods face an inherent trade-off between generation quality and inference efficiency: flow-matching models based on instantaneous velocity estimation require multi-step iterative sampling, resulting in slow inference. To address this, we propose a MeanFlow-based acceleration framework that replaces instantaneous velocity modeling with average velocity modeling, enabling high-fidelity single-step audio generation. We further introduce a scalar rescaling strategy to mitigate the distortion induced by classifier-free guidance under single-step sampling. Additionally, we employ multimodal conditional joint training to unify video-to-audio and text-to-audio synthesis within a single model. Experiments demonstrate that MeanFlow achieves state-of-the-art perceptual quality (e.g., on MCD and STOI) while accelerating inference by over 10× compared to conventional flow-matching and diffusion-based baselines. It consistently outperforms existing methods across both tasks, validating its effectiveness, efficiency, and cross-modal generalization capability.
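The speed-up described above comes from replacing many Euler steps over an instantaneous velocity field with a single evaluation of an average-velocity network. A minimal sketch of the contrast, where `u_avg` and `v_inst` are hypothetical stand-ins for the trained networks (the paper's actual architecture and parameterization are not shown here):

```python
import numpy as np

def one_step_sample(u_avg, z1):
    """MeanFlow-style one-step generation: z0 = z1 - u_avg(z1, r=0, t=1),
    where u_avg models the *average* velocity over the interval [r, t].
    A single network evaluation replaces the whole sampling loop."""
    return z1 - u_avg(z1, 0.0, 1.0)

def multi_step_sample(v_inst, z1, steps=50):
    """Baseline: Euler sampling with an *instantaneous* velocity field,
    as in standard flow matching (one network evaluation per step)."""
    z, dt = z1, 1.0 / steps
    for i in range(steps):
        t = 1.0 - i * dt
        z = z - v_inst(z, t) * dt
    return z
```

For a constant velocity field the two agree exactly; in general, the average-velocity model absorbs the integration that the multi-step sampler performs numerically.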

πŸ“ Abstract
A key challenge in synthesizing audio from silent videos is the inherent trade-off between synthesis quality and inference efficiency in existing methods. For instance, flow-matching-based models rely on modeling instantaneous velocity and therefore require an iterative sampling process, leading to slow inference. To address this efficiency bottleneck, we introduce a MeanFlow-accelerated model that characterizes flow fields by average velocity, enabling one-step generation and thereby significantly accelerating multimodal video-to-audio (VTA) synthesis while preserving audio quality, semantic alignment, and temporal synchronization. Furthermore, a scalar rescaling mechanism is employed to balance conditional and unconditional predictions when classifier-free guidance (CFG) is applied, effectively mitigating CFG-induced distortions in one-step generation. Since the audio synthesis network is jointly trained with multimodal conditions, we further evaluate it on the text-to-audio (TTA) synthesis task. Experimental results demonstrate that incorporating MeanFlow into the network significantly improves inference speed without compromising perceptual quality on both VTA and TTA synthesis tasks.
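The scalar rescaling mechanism for CFG can be illustrated with a minimal sketch. This follows the common norm-rescaling remedy for guidance-induced distortion; the paper's exact formulation may differ, and the blending factor `s` is a hypothetical parameter introduced here for illustration:

```python
import numpy as np

def cfg_rescaled(u_cond, u_uncond, w, s=1.0, eps=1e-8):
    """Classifier-free guidance with scalar rescaling (illustrative sketch).

    Standard CFG extrapolates: guided = u_uncond + w * (u_cond - u_uncond),
    which can inflate the prediction's magnitude and distort one-step output.
    Rescaling shrinks the guided prediction back toward the conditional
    prediction's norm, blended by the factor s (s=1: full rescaling).
    """
    guided = u_uncond + w * (u_cond - u_uncond)
    norm_ratio = np.linalg.norm(u_cond) / (np.linalg.norm(guided) + eps)
    return s * (guided * norm_ratio) + (1.0 - s) * guided
```

With `s=1` the guided prediction keeps CFG's direction but is scaled to match the conditional prediction's norm, which counters the magnitude blow-up that large guidance weights cause in a single sampling step.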
Problem

Research questions and friction points this paper is trying to address.

Addresses slow inference in video-to-audio synthesis
Improves efficiency via one-step generation method
Maintains audio quality and synchronization
Innovation

Methods, ideas, or system contributions that make the work stand out.

MeanFlow-accelerated one-step audio synthesis
Scalar rescaling balances conditional and unconditional predictions
Joint multimodal training enables cross-task application
Xiaoran Yang
School of Electronic Information, Wuhan University, Wuhan, China
Jianxuan Yang
MiLM Plus, Xiaomi Inc., Wuhan, China
Xinyue Guo
MiLM Plus, Xiaomi Inc., Wuhan, China
Haoyu Wang
School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics, Chengdu, China
Ningning Pan
Assistant Professor, Southwestern University of Finance and Economics
Speech Enhancement, binaural hearing, deep learning
Gongping Huang
Professor, Wuhan University, Wuhan, China
Acoustic Signal Processing, Microphone Arrays, Speech Enhancement, Noise Reduction