🤖 AI Summary
Positional encoding (PE) in time-frequency (TF) dual-path Transformer architectures for speech separation often degrades generalization to unseen sequence lengths and sampling rates—critical for real-world robustness. Method: We systematically evaluate the impact of PE on the TF-Locoformer architecture, comparing absolute/relative PE, RoPE, ALiBi, and PE-free variants, integrated with TF masking and dual-path recurrence. Contribution/Results: Contrary to conventional wisdom, PE improves in-distribution performance but severely harms length extrapolation and sampling-rate-agnostic generalization. Removing PE entirely—and augmenting with lightweight convolutional layers—yields superior out-of-distribution generalization: up to +2.1 dB SNR improvement on unseen lengths and across multiple sampling rates. This work is the first to demonstrate that a PE-free design can simultaneously preserve modeling capacity and achieve strong extrapolation in dual-path TF-Transformers, establishing a new paradigm for robust speech separation.
📝 Abstract
In this study, we investigate the impact of positional encoding (PE) on source separation performance and the generalization ability to long sequences (length extrapolation) in Transformer-based time-frequency (TF) domain dual-path models. The length extrapolation capability in TF-domain dual-path models is a crucial factor, as it affects not only their performance on long-duration inputs but also their generalizability to signals with unseen sampling rates. While PE is known to significantly impact length extrapolation, there has been limited research that explores the choice of PEs for TF-domain dual-path models from this perspective. To address this gap, we compare various PE methods using a recent state-of-the-art model, TF-Locoformer, as the base architecture. Our analysis yields the following key findings: (i) When handling sequences that are the same length as or shorter than those seen during training, models with PEs achieve better performance. (ii) However, models without PE exhibit superior length extrapolation. This trend is particularly pronounced when the model contains convolutional layers.