DGFNet: End-to-End Audio-Visual Source Separation Based on Dynamic Gating Fusion

📅 2025-04-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audio-visual source separation methods face two key bottlenecks: (1) fusion at bottleneck layers suffers from modality-induced information loss due to inherent audio-visual discrepancies; (2) decoder-driven cross-modal interaction is constrained by insufficient encoder-level multimodal representation learning. To address these, we propose an end-to-end dynamic gating fusion framework. Its core contributions are: (1) a novel learnable dynamic gating mechanism that adaptively modulates fusion strength between audio and visual features, mitigating modality imbalance; (2) an audio-specific attention module to enhance acoustic discriminability; and (3) a multimodal feature alignment strategy to improve cross-modal collaborative modeling. Evaluated on two mainstream benchmarks, our method achieves significant improvements over state-of-the-art approaches in SI-SNRi, with average gains of 1.8–2.3 dB.

Technology Category

Application Category

📝 Abstract
Current Audio-Visual Source Separation methods primarily adopt two design strategies. The first strategy involves fusing audio and visual features at the bottleneck layer of the encoder, followed by processing the fused features through the decoder. However, when there is a significant disparity between the two modalities, this approach may lead to the loss of critical information. The second strategy avoids direct fusion and instead relies on the decoder to handle the interaction between audio and visual features. Nonetheless, if the encoder fails to integrate information across modalities adequately, the decoder may be unable to effectively capture the complex relationships between them. To address these issues, this paper proposes a dynamic fusion method based on a gating mechanism that dynamically adjusts the modality fusion degree. This approach mitigates the limitations of solely relying on the decoder and facilitates efficient collaboration between audio and visual features. Additionally, an audio attention module is introduced to enhance the expressive capacity of audio features, thereby further improving model performance. Experimental results demonstrate that our method achieves significant performance improvements on two benchmark datasets, validating its effectiveness and advantages in Audio-Visual Source Separation tasks.
Problem

Research questions and friction points this paper is trying to address.

Dynamic gating fusion for audio-visual source separation
Addressing modality disparity in feature fusion
Enhancing audio feature expression with attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic gating mechanism for modality fusion
Audio attention module enhances feature expression
End-to-end audio-visual source separation model
🔎 Similar Papers
No similar papers found.