AC-Foley: Reference-Audio-Guided Video-to-Audio Synthesis with Acoustic Transfer

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video-to-audio generation methods rely on text prompts, which offer coarse semantic granularity and struggle to precisely describe acoustic details, thereby limiting the controllability of sound synthesis. To address this limitation, this work proposes AC-Foley, a novel model that, for the first time, introduces reference audio as an explicit conditioning signal in video-to-audio generation. By jointly modeling visual and reference audio signals, AC-Foley enables direct manipulation of timbre and fine-grained acoustic features, bypassing the semantic bottleneck inherent in text-based approaches. The method supports fine-grained sound generation, timbre transfer, and zero-shot audio synthesis, achieving state-of-the-art performance on Foley sound effects tasks. Notably, even in the absence of reference audio, AC-Foley matches or exceeds the performance of existing state-of-the-art methods, significantly enhancing both the quality and controllability of generated audio.

📝 Abstract
Existing video-to-audio (V2A) generation methods predominantly rely on text prompts alongside visual information to synthesize audio. However, two critical bottlenecks persist: semantic granularity gaps in training data, such as conflating acoustically distinct sounds under coarse labels, and textual ambiguity in describing micro-acoustic features. These bottlenecks make fine-grained sound synthesis difficult under text-based control. To address these limitations, we propose AC-Foley, an audio-conditioned V2A model that directly leverages reference audio to achieve precise, fine-grained control over generated sounds. By conditioning directly on audio signals, AC-Foley bypasses the semantic ambiguities of text descriptions and enables precise manipulation of acoustic attributes, supporting fine-grained sound synthesis, timbre transfer, zero-shot sound generation, and improved audio quality. Empirically, AC-Foley achieves state-of-the-art performance for Foley generation when conditioned on reference audio, while remaining competitive with state-of-the-art video-to-audio methods even without audio conditioning.
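The core idea in the abstract, jointly conditioning generation on per-frame visual features and a reference-audio signal, is commonly realized by letting video-frame queries cross-attend to reference-audio features. The paper's actual architecture is not described here, so the following is only a hypothetical NumPy sketch; every name, projection, and dimension below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(video_feats, audio_feats, d_k=32):
    """Fuse reference-audio features into per-frame video features via
    single-head cross-attention: video frames act as queries, the
    reference audio supplies keys and values. Projections would be
    learned in a real model; they are random here for illustration."""
    t_v, d_v = video_feats.shape
    t_a, d_a = audio_feats.shape
    w_q = rng.standard_normal((d_v, d_k)) / np.sqrt(d_v)
    w_k = rng.standard_normal((d_a, d_k)) / np.sqrt(d_a)
    w_v = rng.standard_normal((d_a, d_v)) / np.sqrt(d_a)
    q, k, v = video_feats @ w_q, audio_feats @ w_k, audio_feats @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k), axis=-1)  # (t_v, t_a) weights
    return video_feats + attn @ v                    # residual fusion

video = rng.standard_normal((16, 64))      # 16 video frames, 64-dim features
ref_audio = rng.standard_normal((40, 48))  # 40 audio frames, 48-dim features
cond = cross_attend(video, ref_audio)      # conditioning for the generator
print(cond.shape)  # (16, 64)
```

The fused sequence `cond` would then drive the audio generator in place of (or alongside) a text embedding, which is what lets acoustic detail flow from the reference clip rather than from a coarse text label.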
Problem

Research questions and friction points this paper is trying to address.

video-to-audio synthesis
semantic granularity
textual ambiguity
fine-grained sound synthesis
acoustic features
Innovation

Methods, ideas, or system contributions that make the work stand out.

audio-conditioned synthesis
video-to-audio generation
timbre transfer
fine-grained sound control
reference audio guidance