PrismAudio: Decomposed Chain-of-Thoughts and Multi-dimensional Rewards for Video-to-Audio Generation

📅 2025-11-24
🤖 AI Summary
Video-to-audio (V2A) generation requires joint optimization across four perceptual dimensions: semantic consistency, audio-visual temporal alignment, aesthetic quality, and spatial accuracy. Existing methods, however, couple all objectives into a single loss function, creating performance bottlenecks, and lack alignment with human perceptual preferences. This work introduces the first reinforcement learning framework for V2A, featuring a four-way decoupled chain-of-thought planning module that explicitly models the semantic, temporal, aesthetic, and spatial aspects, each paired with a targeted reward function. Training efficiency is improved by Fast-GRPO, which employs hybrid ODE-SDE sampling. To enable fine-grained evaluation, the authors introduce AudioCanvas, a distributionally balanced benchmark with 300 single-event classes and 501 multi-event samples. On both the in-domain VGGSound test set and the out-of-domain AudioCanvas benchmark, the method achieves state-of-the-art performance across all four dimensions, outperforming prior approaches while improving interpretability, generalization, and practical applicability.

📝 Abstract
Video-to-Audio (V2A) generation requires balancing four critical perceptual dimensions: semantic consistency, audio-visual temporal synchrony, aesthetic quality, and spatial accuracy; yet existing methods suffer from objective entanglement that conflates competing goals in single loss functions and lack human preference alignment. We introduce PrismAudio, the first framework to integrate Reinforcement Learning into V2A generation with specialized Chain-of-Thought (CoT) planning. Our approach decomposes monolithic reasoning into four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial CoT), each paired with targeted reward functions. This CoT-reward correspondence enables multidimensional RL optimization that guides the model to jointly generate better reasoning across all perspectives, solving the objective entanglement problem while preserving interpretability. To make this optimization computationally practical, we propose Fast-GRPO, which employs hybrid ODE-SDE sampling that dramatically reduces the training overhead compared to existing GRPO implementations. We also introduce AudioCanvas, a rigorous benchmark that is more distributionally balanced and covers more realistically diverse and challenging scenarios than existing datasets, with 300 single-event classes and 501 multi-event samples. Experimental results demonstrate that PrismAudio achieves state-of-the-art performance across all four perceptual dimensions on both the in-domain VGGSound test set and out-of-domain AudioCanvas benchmark. The project page is available at https://PrismAudio-Project.github.io.
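The abstract credits Fast-GRPO's efficiency to hybrid ODE-SDE sampling: most denoising steps follow the cheap, deterministic probability-flow ODE, while a few steps use an SDE update to inject the stochasticity that policy-gradient optimization needs. The toy sketch below illustrates that idea on a rectified-flow-style sampler; the function names, the toy vector field, the noise scale, and the choice of which steps are stochastic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hybrid_step(x, t, dt, velocity, stochastic, sigma=0.5, rng=None):
    """One denoising step of a flow-based sampler.

    Deterministic (ODE) steps are cheap and reproducible; stochastic
    (SDE) steps add exploration noise for RL. `velocity` stands in for
    the learned vector field v(x, t) of a V2A flow model.
    """
    v = velocity(x, t)
    if not stochastic:
        return x + v * dt  # plain Euler ODE step
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(x.shape)
    # Euler-Maruyama step: same drift term, plus diffusion noise.
    return x + v * dt + sigma * np.sqrt(dt) * noise

# Toy vector field pulling the state toward the origin (a stand-in
# for a trained model, purely for illustration).
velocity = lambda x, t: -x

x = np.ones(4)
ts = np.linspace(1.0, 0.0, 11)
for i, (t0, t1) in enumerate(zip(ts[:-1], ts[1:])):
    # Example schedule: only the first 2 of 10 steps are stochastic.
    x = hybrid_step(x, t0, t0 - t1, velocity, stochastic=(i < 2))
```

With `stochastic=False` throughout, the loop reduces to an ordinary Euler ODE solve, which is the deterministic fast path this hybrid scheme exploits.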
Problem

Research questions and friction points this paper is trying to address.

Balancing four perceptual dimensions in video-to-audio generation
Solving objective entanglement in single loss functions
Aligning generated audio with human preference requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposed Chain-of-Thought modules for specialized reasoning
Multidimensional RL optimization with targeted reward functions
Fast-GRPO with hybrid ODE-SDE sampling reduces training overhead
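The pairing of per-dimension rewards with GRPO-style optimization described above can be sketched as follows. The paper does not publish its reward formulas, so the aggregation weights, array shapes, and function names here are illustrative assumptions; the sketch only shows the group-relative advantage computation that gives GRPO its name, applied to a four-dimensional reward vector.

```python
import numpy as np

def group_relative_advantages(rewards, weights=None, eps=1e-8):
    """GRPO-style advantages without a value network.

    rewards: (group_size, num_dims) array, one row per sampled audio
             candidate, one column per perceptual dimension
             (semantic, temporal, aesthetic, spatial).
    """
    rewards = np.asarray(rewards, dtype=float)
    if weights is None:
        # Equal weighting across dimensions (an assumption).
        weights = np.full(rewards.shape[1], 1.0 / rewards.shape[1])
    # Collapse the four dimensions into one scalar reward per candidate.
    scalar = rewards @ weights
    # Normalize against the sampling group's own mean and std, so each
    # candidate is scored relative to its siblings.
    return (scalar - scalar.mean()) / (scalar.std() + eps)

# Four sampled candidates scored on (semantic, temporal, aesthetic, spatial):
group = [
    [0.9, 0.7, 0.8, 0.6],
    [0.5, 0.6, 0.7, 0.5],
    [0.8, 0.9, 0.6, 0.7],
    [0.4, 0.5, 0.5, 0.4],
]
adv = group_relative_advantages(group)  # positive = better than group average
```

Candidates scoring above the group mean get positive advantages and are reinforced; those below are suppressed, with no learned critic required.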
👥 Authors
Huadai Liu
Hong Kong University of Science and Technology (HKUST)
Kaicheng Luo
Tongyi Fun Team, Alibaba Group
Wen Wang
Tongyi Fun Team, Alibaba Group
Qian Chen
Tongyi Fun Team, Alibaba Group
Peiwen Sun
Multimedia Lab, The Chinese University of Hong Kong
multimodal learning
Rongjie Huang
FAIR, Zhejiang University
multimedia computing, speech, natural language processing
Xiangang Li
Unknown affiliation
speech recognition, natural language processing
Jieping Ye
Tongyi Fun Team, Alibaba Group
Wei Xue
Hong Kong University of Science and Technology (HKUST)