🤖 AI Summary
This work proposes a non-intrusive speech intelligibility prediction method based on a bottleneck Transformer architecture, addressing the limitation that the traditional Short-Time Objective Intelligibility (STOI) metric relies on clean reference speech and thus cannot be computed in real-world, reference-free scenarios. The proposed model integrates convolutional modules to extract frame-level acoustic features and employs multi-head self-attention to emphasize critical time-frequency information. To enhance representation capability, it further fuses self-supervised learning (SSL) representations with spectral features. Experimental results demonstrate that the method outperforms existing approaches under both seen and unseen test conditions, achieving higher prediction correlation and lower mean squared error.
📝 Abstract
In this study, we present a novel approach to predicting the Short-Time Objective Intelligibility (STOI) metric using a bottleneck Transformer architecture. Traditional methods for calculating STOI typically require clean reference speech, which limits their applicability in real-world scenarios. To address this, numerous deep-learning-based non-intrusive speech assessment models have garnered significant interest. Many studies have achieved commendable performance, but there is room for further improvement.
We propose the use of a bottleneck Transformer, incorporating convolution blocks to learn frame-level features and a multi-head self-attention (MHSA) layer to aggregate that information. These components enable the Transformer to focus on the key aspects of the input data. Our model shows higher correlation and lower mean squared error in both seen and unseen scenarios than the state-of-the-art model that uses self-supervised learning (SSL) and spectral features as inputs.
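To make the aggregation step concrete, the sketch below implements single-head scaled dot-product self-attention over a sequence of frame-level feature vectors in plain Python. This is only an illustration of the general MHSA mechanism, not the authors' model: the paper uses a learned multi-head layer inside a bottleneck Transformer, whereas here the queries, keys, and values are the frames themselves, with no learned projections.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(frames):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    `frames` is a list of T frame-level feature vectors (lists of floats),
    e.g. the output of the convolution blocks. For simplicity, queries,
    keys, and values are the frames themselves (no learned projections,
    unlike a real MHSA layer). Returns T aggregated vectors of dimension d.
    """
    d = len(frames[0])
    scale = math.sqrt(d)
    out = []
    for q in frames:
        # Score this query frame against every key frame.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in frames]
        weights = softmax(scores)  # attention weights sum to 1
        # Output is the attention-weighted average of the value frames.
        out.append([sum(w * v[j] for w, v in zip(weights, frames))
                    for j in range(d)])
    return out
```

Because each output frame is a convex combination of all input frames, frames that score highly against many others dominate the aggregate, which is the sense in which attention lets the model "focus on the key aspects" of the input.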