🤖 AI Summary
Video-to-audio (V2A) generation requires joint optimization across four perceptual dimensions: semantic consistency, audio-visual temporal alignment, audio fidelity and aesthetics, and spatial localization. Existing methods, however, couple all of these objectives into a single loss function, creating performance bottlenecks, and lack alignment with human perceptual preferences. This work introduces PrismAudio, the first reinforcement learning framework for V2A, featuring a four-dimensional decoupled chain-of-thought planning module (explicitly modeling semantic, temporal, aesthetic, and spatial aspects) and a corresponding multi-dimensional reward function. Training efficiency is further improved with Fast-GRPO, which uses hybrid ODE-SDE sampling. To enable fine-grained evaluation, the authors also introduce AudioCanvas, a distributionally balanced benchmark with 300 single-event classes and 501 multi-event samples. On both VGGSound and AudioCanvas, the method achieves state-of-the-art performance across all four dimensions, significantly outperforming prior approaches while offering improved interpretability, generalization, and practical applicability.
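The core idea of pairing each CoT module with its own reward can be sketched as follows. This is an illustrative toy, not the paper's implementation: the equal reward weights, the function names (`total_reward`, `group_relative_advantages`), and the plain group-standardized advantage are assumptions; the paper only states that four targeted rewards drive a GRPO-style multidimensional optimization.

```python
import statistics

# Toy sketch: each rollout in a GRPO group is scored along the four decoupled
# dimensions (semantic, temporal, aesthetic, spatial), and advantages are
# normalized within the group instead of against a learned critic.

def total_reward(scores, weights=(1.0, 1.0, 1.0, 1.0)):
    """Combine per-dimension rewards into one scalar.

    `scores` is a 4-tuple (semantic, temporal, aesthetic, spatial);
    the equal weights are an assumption for illustration only.
    """
    return sum(w * s for w, s in zip(weights, scores))

def group_relative_advantages(group_scores):
    """GRPO-style advantage: standardize total rewards within one rollout group."""
    totals = [total_reward(s) for s in group_scores]
    mean = statistics.fmean(totals)
    std = statistics.pstdev(totals) or 1.0  # guard against a zero-variance group
    return [(t - mean) / std for t in totals]
```

Keeping the four rewards separate (rather than pre-mixing them into one training loss) is what lets the RL signal attribute a low score to a specific dimension, which is the stated remedy for objective entanglement.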
📝 Abstract
Video-to-Audio (V2A) generation requires balancing four critical perceptual dimensions: semantic consistency, audio-visual temporal synchrony, aesthetic quality, and spatial accuracy; yet existing methods suffer from objective entanglement that conflates competing goals in single loss functions and lack human preference alignment. We introduce PrismAudio, the first framework to integrate Reinforcement Learning into V2A generation with specialized Chain-of-Thought (CoT) planning. Our approach decomposes monolithic reasoning into four specialized CoT modules (Semantic, Temporal, Aesthetic, and Spatial CoT), each paired with targeted reward functions. This CoT-reward correspondence enables multidimensional RL optimization that guides the model to jointly generate better reasoning across all perspectives, solving the objective entanglement problem while preserving interpretability. To make this optimization computationally practical, we propose Fast-GRPO, which employs hybrid ODE-SDE sampling that dramatically reduces the training overhead compared to existing GRPO implementations. We also introduce AudioCanvas, a rigorous benchmark that is more distributionally balanced and covers more diverse and challenging real-world scenarios than existing datasets, with 300 single-event classes and 501 multi-event samples. Experimental results demonstrate that PrismAudio achieves state-of-the-art performance across all four perceptual dimensions on both the in-domain VGGSound test set and the out-of-domain AudioCanvas benchmark. The project page is available at https://PrismAudio-Project.github.io.
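The hybrid ODE-SDE idea behind Fast-GRPO can be illustrated with a minimal sampler sketch. Everything here is an assumption for illustration: the velocity-field interface `v(x, t)`, the Euler/Euler-Maruyama integrators, and the choice of which steps are stochastic are not the paper's exact schedule; the abstract only states that mixing deterministic ODE steps with stochastic SDE steps cuts rollout cost relative to fully stochastic GRPO sampling.

```python
import math
import random

# Hedged sketch of hybrid ODE-SDE rollout sampling for RL fine-tuning of a
# flow/diffusion model. Most denoising steps use a cheap deterministic Euler
# (ODE) update; only the steps listed in `sde_steps` inject Gaussian noise
# (Euler-Maruyama), giving GRPO the stochasticity it needs for exploration
# without paying the SDE cost at every step.

def hybrid_sample(v, x0, n_steps=10, sde_steps=frozenset({0, 1}),
                  noise_scale=0.1, seed=0):
    """Integrate x from t=1 down to t=0 under the (assumed) velocity field v."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    x = x0
    for i in range(n_steps):
        t = 1.0 - i * dt
        x = x + v(x, t) * dt                               # deterministic drift (ODE part)
        if i in sde_steps:                                 # stochastic perturbation (SDE part)
            x += noise_scale * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

With `sde_steps` empty the trajectory is fully deterministic and reproducible; enabling noise on just a few steps yields distinct rollouts per seed, which is what group-relative reward comparison requires.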