Taming Text-to-Sounding Video Generation via Advanced Modality Condition and Interaction

📅 2025-10-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Text-to-Sounding-Video (T2SV) generation faces two key challenges: modality interference caused by a single shared text caption, and an ill-defined cross-modal interaction mechanism. To address these, the paper proposes the Hierarchical Visual-Grounded Captioning (HVGC) framework, which decouples the audio and video conditioning inputs, and BridgeDiT, a dual-tower diffusion transformer. BridgeDiT introduces Dual Cross-Attention (DCA), a mechanism enabling a symmetric, bidirectional exchange of semantic and temporal information across modalities. This design mitigates text-induced audio-visual confusion and improves synchronized generation quality. The method achieves state-of-the-art performance on three benchmark datasets on most automatic metrics as well as in human evaluation. Ablation studies validate the efficacy of each component. Code and models will be publicly released.

πŸ“ Abstract
This study focuses on a challenging yet promising task, Text-to-Sounding-Video (T2SV) generation, which aims to generate a video with synchronized audio from text conditions, while ensuring both modalities are aligned with the text. Despite progress in joint audio-video training, two critical challenges remain unaddressed: (1) a single caption shared between the video and audio branches often creates modal interference, confusing the pretrained backbones, and (2) the optimal mechanism for cross-modal feature interaction remains unclear. To address these challenges, we first propose the Hierarchical Visual-Grounded Captioning (HVGC) framework, which generates pairs of disentangled captions, a video caption and an audio caption, eliminating interference at the conditioning stage. Building on HVGC, we further introduce BridgeDiT, a novel dual-tower diffusion transformer, which employs a Dual Cross-Attention (DCA) mechanism that acts as a robust "bridge" to enable a symmetric, bidirectional exchange of information, achieving both semantic and temporal synchronization. Extensive experiments on three benchmark datasets, supported by human evaluations, demonstrate that our method achieves state-of-the-art results on most metrics. Comprehensive ablation studies further validate the effectiveness of our contributions and offer key insights for future T2SV research. All code and checkpoints will be publicly released.
Problem

Research questions and friction points this paper is trying to address.

Generating synchronized audio-video from text descriptions
Resolving modal interference in text-to-sounding-video generation
Establishing optimal cross-modal interaction for temporal synchronization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Visual-Grounded Captioning generates disentangled video and audio captions
BridgeDiT dual-tower diffusion transformer enables bidirectional cross-modal interaction
Dual Cross-Attention (DCA) mechanism achieves semantic and temporal synchronization
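The core of the DCA idea, as described above, is that video and audio token streams attend to each other symmetrically within each transformer block. A minimal single-head sketch of this bidirectional exchange is shown below; it omits the learned projection matrices, multi-head splitting, and diffusion-timestep conditioning of the actual BridgeDiT model, and all function names here are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # scaled dot-product cross-attention: queries from one modality,
    # keys/values from the other (learned projections omitted for brevity)
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

def dual_cross_attention(video_tokens, audio_tokens):
    # symmetric, bidirectional exchange: video queries attend to audio
    # tokens while audio queries attend to video tokens, each with a
    # residual connection back to its own stream
    d = video_tokens.shape[-1]
    video_out = video_tokens + cross_attention(video_tokens, audio_tokens, d)
    audio_out = audio_tokens + cross_attention(audio_tokens, video_tokens, d)
    return video_out, audio_out
```

Because the exchange is symmetric, neither tower is privileged: each modality's token sequence keeps its own length and feature dimension while being conditioned on the other, which is what allows both semantic and temporal alignment to emerge in one mechanism.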