🤖 AI Summary
To address the labor-intensive workflows and limited creative control faced by sound designers in video Foley synthesis, this paper proposes Stable-V2A, a two-stage audiovisual generation framework. First, an RMS-Mapper estimates an audio envelope from the input video to provide a temporal prior; second, Stable-Foley, a diffusion model built on the Stable Audio Open architecture, generates the final audio. The key contributions are: (1) an envelope-driven ControlNet mechanism for explicit temporal conditioning, and (2) cross-attention-based semantic conditioning on designer-selected sound representations, enabling fine-grained, editable control over both timing and semantics. The model is trained and evaluated on the Greatest Hits benchmark, a standard dataset for V2A evaluation. The authors also introduce Walking The Maps, a new dataset of video game clips showing animated characters walking in different locations, as a case study for the method.
📝 Abstract
Sound designers and Foley artists usually sonorize a scene, such as one from a movie or video game, by manually annotating and sonorizing each action of interest in the video. Our intent is to leave full creative control to sound designers through a tool that lets them bypass the more repetitive parts of their work, so they can focus on the creative aspects of sound production. We achieve this by presenting Stable-V2A, a two-stage model consisting of: an RMS-Mapper that estimates an envelope representative of the audio characteristics associated with the input video; and Stable-Foley, a diffusion model based on Stable Audio Open that generates audio semantically and temporally aligned with the target video. Temporal alignment is guaranteed by using the envelope as a ControlNet input, while semantic alignment is achieved by using sound representations chosen by the designer as cross-attention conditioning of the diffusion process. We train and test our model on Greatest Hits, a dataset commonly used to evaluate V2A models. In addition, to test our model on a case study of interest, we introduce Walking The Maps, a dataset of videos extracted from video games depicting animated characters walking in different locations. Samples and code are available on our demo page at https://ispamm.github.io/Stable-V2A.
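The temporal control signal described above is a frame-wise RMS envelope of the target audio. The abstract does not specify the exact implementation, so the following is a minimal illustrative sketch of how such an envelope could be computed with NumPy; the function name and frame/hop parameters are assumptions, not the authors' code:

```python
import numpy as np

def rms_envelope(audio: np.ndarray, frame_length: int = 1024, hop_length: int = 512) -> np.ndarray:
    """Frame-wise root-mean-square envelope of a mono audio signal.

    This is a generic RMS computation, shown only to illustrate the kind of
    temporal prior an RMS-Mapper would predict and a ControlNet could consume.
    """
    if len(audio) < frame_length:
        raise ValueError("audio shorter than one frame")
    n_frames = 1 + (len(audio) - frame_length) // hop_length
    envelope = np.empty(n_frames)
    for i in range(n_frames):
        frame = audio[i * hop_length : i * hop_length + frame_length]
        envelope[i] = np.sqrt(np.mean(frame ** 2))
    return envelope

# Example: a constant-amplitude signal has a flat envelope,
# while a burst (e.g. a footstep transient) produces a localized peak.
signal = np.zeros(8192)
signal[2048:2048 + 512] = 0.8  # hypothetical impact event
env = rms_envelope(signal)
```

At inference time, this envelope (predicted from video rather than extracted from ground-truth audio) would be fed to the ControlNet branch, so the diffusion model places sound energy exactly where the video shows an action.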