MeMo: Attentional Momentum for Real-time Audio-visual Speaker Extraction under Impaired Visual Conditions

πŸ“… 2025-07-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the severe performance degradation of audio-visual target speaker extraction (AV-TSE) systems when visual cues are missing or severely degraded, this paper proposes MeMo, the first framework to introduce an attentional-momentum mechanism for real-time AV-TSE. MeMo employs two adaptive memory banks that explicitly store attention-related information, preserving continuity in target speaker tracking. It combines audio-visual feature fusion, lightweight temporal modeling, and online adaptive memory updates so the system can run in real time. Extensive experiments across diverse visual-degradation scenarios show that MeMo outperforms the corresponding baselines, achieving SI-SNR improvements of at least 2 dB, confirming its robustness and effectiveness. Key contributions include: (1) the first application of attentional momentum to AV-TSE; (2) a memory-augmented architecture designed for real-time constraints; and (3) stable target speech extraction even in the complete absence of visual cues.

πŸ“ Abstract
Audio-visual Target Speaker Extraction (AV-TSE) aims to isolate a target speaker's voice from multi-speaker environments by leveraging visual cues as guidance. However, the performance of AV-TSE systems heavily relies on the quality of these visual cues. In extreme scenarios where visual cues are missing or severely degraded, the system may fail to accurately extract the target speaker. In contrast, humans can maintain attention on a target speaker even in the absence of explicit auxiliary information. Motivated by such human cognitive ability, we propose a novel framework called MeMo, which incorporates two adaptive memory banks to store attention-related information. MeMo is specifically designed for real-time scenarios: once initial attention is established, the system maintains attentional momentum over time, even when visual cues become unavailable. We conduct comprehensive experiments to verify the effectiveness of MeMo. Experimental results demonstrate that our proposed framework achieves SI-SNR improvements of at least 2 dB over the corresponding baseline.
Problem

Research questions and friction points this paper is trying to address.

Speaker extraction degrades sharply when visual cues are impaired
Attention on a target speaker is hard to maintain without continuous visual input
Real-time audio-visual extraction must remain robust under these conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses adaptive memory banks for attention storage
Maintains attentional momentum without visual cues
Achieves real-time speaker extraction efficiently