🤖 AI Summary
This study addresses two limitations in discrete speech enhancement: (1) semantic tokens discard speaker identity and other fine-grained acoustic details, and (2) existing approaches rely on non-autoregressive models that assume conditional independence of outputs. To this end, the authors propose an autoregressive speech enhancement framework based on acoustic tokens. The method employs a transducer-based autoregressive architecture that models discrete acoustic token sequences, and systematically evaluates robustness across varying bitrates and noise levels. Experiments on VoiceBank and Libri1Mix show that acoustic tokens outperform semantic tokens, especially in speaker identity preservation and intelligibility, and that autoregressive modeling further improves speech quality (PESQ +0.32, STOI +1.8%), though discrete representations remain inferior to continuous counterparts. The work empirically validates autoregressive modeling over acoustic tokens for speech enhancement and points to multimodal integration as a promising direction.
📝 Abstract
In speech processing pipelines, improving the quality and intelligibility of real-world recordings is crucial. While supervised regression remains the primary method for speech enhancement, audio tokenization is emerging as a promising alternative that enables smooth integration with other modalities. However, research on speech enhancement using discrete representations is still limited. Previous work has focused mainly on semantic tokens, which tend to discard key acoustic details such as speaker identity. Moreover, these studies typically employ non-autoregressive models, assuming conditional independence of outputs and overlooking the potential gains of autoregressive modeling. To address these gaps, we 1) conduct a comprehensive study of the performance of acoustic tokens for speech enhancement, including the effects of bitrate and noise strength; and 2) introduce a novel transducer-based autoregressive architecture designed specifically for this task. Experiments on the VoiceBank and Libri1Mix datasets show that acoustic tokens outperform semantic tokens in preserving speaker identity, and that our autoregressive approach further improves performance. Nevertheless, we observe that discrete representations still fall short of continuous ones, highlighting the need for further research in this area.
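To make the autoregressive-versus-non-autoregressive distinction concrete, here is a toy sketch (entirely hypothetical, not the paper's model): enhancement is framed as predicting a clean discrete-token sequence from a noisy one. The autoregressive decoder conditions each output token on the previously emitted token, while the non-autoregressive baseline predicts every position independently, as assumed by the prior work the abstract critiques. The codebook size, the random "learned" weight tensor `W`, and both decode functions are illustrative assumptions.

```python
import numpy as np

VOCAB = 8  # toy codebook size; real neural codecs use far larger codebooks

rng = np.random.default_rng(0)
# Hypothetical "learned" scores indexed as
# [noisy_token, previous_clean_token, next_clean_token].
W = rng.normal(size=(VOCAB, VOCAB, VOCAB))

def ar_decode(noisy_tokens, bos=0):
    """Greedy autoregressive decode: each output depends on the history."""
    out, prev = [], bos
    for x in noisy_tokens:
        logits = W[x, prev]          # conditioned on input AND previous output
        prev = int(np.argmax(logits))
        out.append(prev)
    return out

def nar_decode(noisy_tokens):
    """Non-autoregressive baseline: conditional independence across steps."""
    # Marginalise out the history dimension, so each step ignores past outputs.
    return [int(np.argmax(W[x].mean(axis=0))) for x in noisy_tokens]

noisy = [3, 1, 4, 1, 5]
clean_ar = ar_decode(noisy)
clean_nar = nar_decode(noisy)
```

A real system would replace `W` with a trained network (the paper uses a transducer-based architecture) and the greedy argmax with proper search, but the contrast in conditioning structure is the point of the sketch.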