VSSFlow: Unifying Video-conditioned Sound and Speech Generation via Joint Learning

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges of modeling heterogeneous conditions (e.g., ambiguous video versus deterministic text) and mitigating multi-stage training complexity in audio generation. To this end, we propose VSSFlow—a unified framework for both video-to-sound (V2S) and visual-text-to-speech (VisualTTS). Methodologically, VSSFlow introduces a conditional aggregation mechanism: cross-attention models video–audio alignment, while self-attention precisely handles text transcriptions; crucially, we identify shared audio priors across tasks that significantly improve generation quality and training stability. Furthermore, VSSFlow adopts an end-to-end flow-matching architecture with classifier-free guidance. Extensive experiments demonstrate that VSSFlow consistently outperforms task-specific state-of-the-art models on standard V2S and VisualTTS benchmarks, validating the effectiveness, generalizability, and training simplicity of a unified generative paradigm.
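To make the condition aggregation mechanism concrete, below is a minimal PyTorch-style sketch of how such a block might combine the two condition types; the class and argument names (ConditionAggregationBlock, video_feats, text_tokens) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only -- not the authors' code.
import torch
import torch.nn as nn

class ConditionAggregationBlock(nn.Module):
    """One transformer block mixing the two condition types described above:
    - ambiguous video features enter via cross-attention
    - deterministic transcript tokens are concatenated with the noisy audio
      latents and processed together by self-attention."""
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, audio_latents, text_tokens, video_feats):
        # Self-attention over [transcript tokens ; audio latents]
        seq = torch.cat([text_tokens, audio_latents], dim=1)
        normed = self.norm1(seq)
        seq = seq + self.self_attn(normed, normed, normed)[0]
        # Cross-attention from the joint sequence to video features
        # (video_feats assumed already projected to `dim`)
        seq = seq + self.cross_attn(self.norm2(seq), video_feats, video_feats)[0]
        seq = seq + self.mlp(self.norm3(seq))
        # Keep only the audio positions for the next block / velocity head
        return seq[:, text_tokens.size(1):]
```

The split follows the paper's observation about inductive biases: cross-attention suits loosely aligned video evidence, while self-attention over a concatenated sequence lets the deterministic transcript constrain the audio tokens directly.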

📝 Abstract
Video-conditioned sound and speech generation, encompassing the video-to-sound (V2S) and visual text-to-speech (VisualTTS) tasks, has conventionally been treated as two separate problems, with limited exploration of unifying them within a single framework. Recent attempts to unify V2S and VisualTTS face challenges in handling distinct condition types (e.g., heterogeneous video and transcript conditions) and require complex training stages; unifying the two tasks remains an open problem. To bridge this gap, we present VSSFlow, which seamlessly integrates both V2S and VisualTTS into a unified flow-matching framework. VSSFlow uses a novel condition aggregation mechanism to handle distinct input signals. We find that cross-attention and self-attention layers exhibit different inductive biases when introducing conditions, and VSSFlow leverages these biases to handle different representations effectively: cross-attention for ambiguous video conditions and self-attention for more deterministic speech transcripts. Furthermore, contrary to the prevailing belief that joint training on the two tasks requires complex training strategies and may degrade performance, we find that VSSFlow benefits from end-to-end joint learning of sound and speech generation without extra training-stage designs. Detailed analysis attributes this to a general audio prior shared between tasks, which accelerates convergence, enhances conditional generation, and stabilizes the classifier-free guidance process. Extensive experiments demonstrate that VSSFlow surpasses state-of-the-art domain-specific baselines on both V2S and VisualTTS benchmarks, underscoring the critical potential of unified generative models.
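The end-to-end joint learning the abstract describes can be pictured with a short flow-matching training sketch, assuming a rectified-flow (linear interpolation) formulation and random condition dropout to enable classifier-free guidance; the batch fields, dropout rate, and model signature are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative training step, assuming rectified-flow matching:
# x_t = (1 - t) * x0 + t * x1, with velocity target (x1 - x0).
import torch

def joint_flow_matching_step(model, batch, cond_drop_prob=0.1):
    """One training step shared by V2S and VisualTTS examples.
    `batch` is assumed to hold audio latents plus video features and
    (for VisualTTS) transcript tokens; V2S samples carry empty text."""
    x1 = batch["audio_latents"]                      # target audio latents
    x0 = torch.randn_like(x1)                        # noise sample
    t = torch.rand(x1.size(0), device=x1.device)     # per-sample time
    t_ = t.view(-1, 1, 1)
    x_t = (1 - t_) * x0 + t_ * x1                    # linear interpolation
    target = x1 - x0                                 # velocity target

    # Randomly drop conditions so the model also learns an unconditional
    # audio prior, enabling classifier-free guidance at inference.
    drop = torch.rand(x1.size(0), device=x1.device) < cond_drop_prob
    video = batch["video_feats"].masked_fill(drop.view(-1, 1, 1), 0.0)
    text = batch["text_tokens"].masked_fill(drop.view(-1, 1), 0)  # 0 = assumed null token

    pred = model(x_t, t, text_tokens=text, video_feats=video)
    return torch.mean((pred - target) ** 2)
```

At inference, classifier-free guidance would then combine conditional and unconditional velocity estimates, e.g. v = v_uncond + w * (v_cond - v_uncond); the shared audio prior learned through joint training is what the paper credits for stabilizing this guidance process.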
Problem

Research questions and friction points this paper is trying to address.

Unifying video-to-sound and visual text-to-speech generation in one framework
Handling distinct input conditions like video and transcript signals effectively
Overcoming complex training stages needed for joint sound and speech generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies video-to-sound and visual text-to-speech via flow-matching
Uses cross-attention and self-attention for distinct conditions
Employs end-to-end joint learning with shared audio prior
👥 Authors
Xin Cheng (Renmin University of China)
Yuyue Wang (Renmin University of China)
Xihua Wang (Renmin University of China)
Yihan Wu (Renmin University of China)
Kaisi Guan (Renmin University of China)
Yijing Chen (Renmin University of China)
Peng Zhang (Apple)
Xiaojiang Liu (Apple)
Meng Cao (Postdoc, Carnegie Mellon University)
Ruihua Song (Renmin University of China; research interests: AI-based creation, multi-modality chitchat, natural language understanding, information retrieval, information extraction)