Residual Tokens Enhance Masked Autoencoders for Speech Modeling

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing speech modeling approaches, which rely heavily on explicit attributes such as pitch, content, and speaker identity and thus struggle to capture implicit factors like timbre, emotion, and background noise. To overcome this, the authors propose RT-MAE, a novel framework that introduces trainable, unsupervised residual tokens into a masked autoencoder architecture. These residual tokens jointly model implicit speech characteristics alongside explicit attributes, enabling the encoding of complex, unannotated information in speech signals. The method significantly improves reconstruction quality and expressive naturalness while preserving content fidelity and speaker similarity. Furthermore, RT-MAE demonstrates strong performance in speech enhancement tasks, effectively balancing noise suppression with the retention of natural speech characteristics.
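The core idea can be illustrated with a minimal sketch: learnable residual tokens are fed into a masked-autoencoder encoder alongside explicit attribute embeddings, and the reconstruction loss on masked frames is what would drive the tokens to absorb unlabeled variation. All dimensions, names, and the stand-in mean-pooling "decoder" below are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): T frames, D feature dim,
# K residual tokens learned without supervision.
T, D, K = 50, 16, 4

speech = rng.normal(size=(T, D))           # input speech features
explicit = rng.normal(size=(T, D))         # explicit attribute embeddings (pitch/content/speaker)
residual_tokens = rng.normal(size=(K, D))  # trainable residual tokens (would receive gradients)

# MAE-style masking: hide a random ~60% of frames from the encoder.
mask = rng.random(T) < 0.6
visible = speech[~mask]

# Encoder input = visible frames + their explicit attributes + residual tokens.
encoder_input = np.concatenate([visible, explicit[~mask], residual_tokens], axis=0)

# Stand-in "decoder": predict every masked frame from the mean context vector.
context = encoder_input.mean(axis=0)
reconstruction = np.tile(context, (int(mask.sum()), 1))

# Training would minimize reconstruction error on masked frames only, so the
# residual tokens end up encoding whatever the explicit attributes miss.
loss = float(np.mean((reconstruction - speech[mask]) ** 2))
print(round(loss, 4))
```

In a real implementation the residual tokens would be `nn.Parameter`s updated by backpropagation through a transformer encoder/decoder; the sketch only shows where they enter the pipeline.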

📝 Abstract
Recent speech modeling relies on explicit attributes such as pitch, content, and speaker identity, but these alone cannot capture the full richness of natural speech. We introduce RT-MAE, a novel masked autoencoder framework that augments supervised attribute-based modeling with unsupervised, trainable residual tokens designed to encode the information not explained by explicit labeled factors (e.g., timbre variation, noise, emotion). Experiments show that RT-MAE improves reconstruction quality, preserving content and speaker similarity while enhancing expressivity. We further demonstrate its applicability to speech enhancement, removing noise at inference while maintaining controllability and naturalness.
Problem

Research questions and friction points this paper is trying to address.

speech modeling
masked autoencoders
residual tokens
expressivity
naturalness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Tokens
Masked Autoencoder
Speech Modeling
Unsupervised Representation
Speech Enhancement