🤖 AI Summary
To address the low efficiency of modality fusion and the trade-off between model capacity and computational cost in audio-visual speech separation, this paper proposes an iterative representation-refinement framework built around a fusion-token bottleneck. Methodologically, it uses modality-specific deep networks to extract unimodal features, then introduces a lightweight cross-modal fusion module that constrains cross-modal representations through a small set of learnable fusion tokens, applying multiple rounds of iterative refinement for efficient information integration. The key contributions are: (1) markedly improved model expressiveness and generalization without a substantial increase in parameters; and (2) state-of-the-art SI-SDR improvement (SI-SDRi) on NTCD-TIMIT and LRS3+WHAM!, while reducing training time and GPU inference latency by more than 50% across nearly all settings.
📝 Abstract
Integrating information from non-auditory cues can significantly improve the performance of speech-separation models. Such models often rely on deep modality-specific networks to obtain unimodal features, and risk either being too computationally costly or, if kept lightweight, lacking capacity. In this work, we present an iterative representation-refinement approach called Bottleneck Iterative Network (BIN), which repeatedly passes through a lightweight fusion block while bottlenecking the fused representations via fusion tokens. This improves model capacity while avoiding a major increase in model size, balancing performance against training cost. We evaluate BIN on challenging noisy audio-visual speech separation tasks and show that our approach consistently outperforms state-of-the-art benchmark models in SI-SDRi on the NTCD-TIMIT and LRS3+WHAM! datasets, while achieving a reduction of more than 50% in training and GPU inference time across nearly all settings.
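To make the fusion-token bottleneck idea concrete, here is a minimal pure-NumPy sketch of the pattern the abstract describes: a small set of tokens (far fewer than the sequence length) cross-attends to both unimodal feature streams, and each stream then reads the fused context back from the tokens, repeated for several refinement rounds. All names, dimensions, and the single-head attention here are illustrative assumptions, not the paper's actual implementation (which would use learned projections and trained parameters).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    # Simplified scaled dot-product attention (no learned projections):
    # each query row reads a convex combination of keys_values rows.
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

rng = np.random.default_rng(0)
T, d, k = 50, 16, 4                # sequence length, feature dim, fusion tokens (k << T)
audio  = rng.normal(size=(T, d))   # stand-in for the audio encoder's features
visual = rng.normal(size=(T, d))   # stand-in for the visual encoder's features
tokens = rng.normal(size=(k, d))   # fusion tokens (learnable in practice; random here)

for _ in range(3):  # iterative refinement rounds
    # 1) Tokens gather cross-modal context; k << T enforces the bottleneck.
    tokens = cross_attend(tokens, np.concatenate([audio, visual], axis=0))
    # 2) Each modality reads the fused context back through the tokens.
    audio  = audio  + cross_attend(audio,  tokens)
    visual = visual + cross_attend(visual, tokens)

print(tokens.shape, audio.shape)  # (4, 16) (50, 16)
```

Because all cross-modal exchange is routed through only `k` token vectors, the fusion step scales with `k` rather than the full sequence length, which is how a bottleneck of this kind can keep the fusion block lightweight while iteration restores capacity.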