Audio-Visual Speech Separation via Bottleneck Iterative Network

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low modality-fusion efficiency and the trade-off between model capacity and computational cost in audio-visual speech separation, this paper proposes an iterative representation-refinement framework built around a fusion-token bottleneck. Modality-specific deep networks extract unimodal features, and a lightweight cross-modal fusion module constrains the cross-modal representation to a small set of learnable fusion tokens, which is refined over multiple iterations for efficient information integration. Key contributions: (1) notably improved model expressiveness and generalization without a substantial increase in parameters; and (2) state-of-the-art SI-SDR improvement (SI-SDRi) on NTCD-TIMIT and LRS3+WHAM!, while reducing both training time and GPU inference latency by more than 50%.

📝 Abstract
Integration of information from non-auditory cues can significantly improve the performance of speech-separation models. Such models often use deep modality-specific networks to obtain unimodal features, and risk being either too computationally costly or too lightweight to have sufficient capacity. In this work, we present an iterative representation refinement approach called Bottleneck Iterative Network (BIN), a technique that repeatedly passes representations through a lightweight fusion block while bottlenecking the fused representation with a small set of fusion tokens. This improves the capacity of the model while avoiding a major increase in model size, balancing model performance against training cost. We test BIN on challenging noisy audio-visual speech separation tasks and show that our approach consistently outperforms state-of-the-art benchmark models in SI-SDRi on the NTCD-TIMIT and LRS3+WHAM! datasets, while reducing training and GPU inference time by more than 50% in nearly all settings.
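Per the abstract, BIN repeatedly passes unimodal features through a lightweight fusion block while forcing all cross-modal exchange through a small set of fusion tokens. A minimal NumPy sketch of that idea follows; single-head attention with untrained random tokens, and all names, shapes, and iteration counts are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(q, kv, d):
    # Single-head scaled dot-product attention: rows of q read from rows of kv.
    scores = q @ kv.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ kv

def bin_fusion(audio, video, n_tokens=4, n_iters=3, seed=0):
    """Bottleneck-token fusion sketch: a few tokens mediate all cross-modal
    exchange, then each modality stream reads back only from the tokens."""
    d = audio.shape[1]
    rng = np.random.default_rng(seed)
    tokens = 0.02 * rng.standard_normal((n_tokens, d))  # stand-in for learned tokens
    for _ in range(n_iters):
        # Tokens gather information from both modalities (the bottleneck).
        tokens = cross_attend(tokens, np.concatenate([audio, video]), d)
        # Each unimodal stream is refined by attending only to the tokens.
        audio = audio + cross_attend(audio, tokens, d)
        video = video + cross_attend(video, tokens, d)
    return audio, video, tokens
```

The cost intuition: with T audio frames and T' video frames, direct cross-attention scales with T·T', whereas routing through k tokens scales with k·(T + T'), which is why a small token bottleneck can keep the fusion block lightweight across iterations.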
Problem

Research questions and friction points this paper is trying to address.

How to integrate non-auditory cues into speech separation efficiently
How to balance model performance against training cost
How to improve separation quality in noisy audio-visual settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative representation refinement via BIN
Lightweight fusion block with bottleneck tokens
Balances performance and cost effectively
Sidong Zhang
Manning College of Information & Computer Sciences, University of Massachusetts Amherst, Amherst, U.S.
Shiv Shankar
UMass
Causal inference, Probabilistic Models, machine learning, healthcare, RL
Trang Nguyen
Technical Staff, MIT Lincoln Laboratory
Natural Language Processing, Large Language Models, Explainable AI, Cyber Analytics
Andrea Fanelli
Principal Researcher at Dolby Laboratories
Multimodal AI, Audio AI, Machine Perception, Biomedical Signal Processing, Wearable Devices
Madalina Fiterau
Manning College of Information & Computer Sciences, University of Massachusetts Amherst, Amherst, U.S.