🤖 AI Summary
This work addresses a limitation of existing speech modeling approaches: they rely heavily on explicit attributes such as pitch, content, and speaker identity, and thus struggle to capture implicit factors such as timbre, emotion, and background noise. To overcome this, the authors propose RT-MAE, a framework that introduces trainable, unsupervised residual tokens into a masked autoencoder architecture. These residual tokens model implicit speech characteristics jointly with the explicit attributes, enabling the encoding of complex, unannotated information in speech signals. The method improves reconstruction quality and expressive naturalness while preserving content fidelity and speaker similarity. RT-MAE also performs well on speech enhancement, balancing noise suppression with the retention of natural speech characteristics.
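To make the architecture concrete, below is a minimal PyTorch sketch of the core idea as described above: learnable residual tokens are prepended to the masked encoder input so they can absorb unannotated factors, while explicit attributes condition the decoder. The dimensions, the specific attribute set (pitch and speaker identity as stand-ins), and the additive fusion scheme are all assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualTokenMAE(nn.Module):
    """Hypothetical sketch of an MAE over speech frames with trainable
    residual tokens. Sizes, attributes, and fusion are assumptions."""

    def __init__(self, feat_dim=80, d_model=256, n_residual_tokens=8,
                 n_layers=4, n_heads=4, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.in_proj = nn.Linear(feat_dim, d_model)
        # Trainable, unsupervised residual tokens: learned jointly with the
        # autoencoder to absorb whatever the explicit attributes cannot explain.
        self.residual_tokens = nn.Parameter(
            torch.randn(n_residual_tokens, d_model) * 0.02)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        # Explicit, supervised attributes (here: pitch and speaker identity).
        self.pitch_proj = nn.Linear(1, d_model)
        self.speaker_emb = nn.Embedding(512, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.decoder = nn.TransformerEncoder(layer, n_layers)  # same cfg, separate weights
        self.out_proj = nn.Linear(d_model, feat_dim)

    def forward(self, feats, pitch, speaker_id):
        # feats: (B, T, feat_dim); pitch: (B, T, 1); speaker_id: (B,)
        B, T, _ = feats.shape
        x = self.in_proj(feats)
        # Randomly mask a fraction of frames (MAE-style).
        mask = torch.rand(B, T, device=feats.device) < self.mask_ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        # Prepend residual tokens so they attend to visible frames and soak up
        # unannotated factors (timbre variation, noise, emotion).
        res = self.residual_tokens.unsqueeze(0).expand(B, -1, -1)
        h = self.encoder(torch.cat([res, x], dim=1))
        # Condition the decoder on explicit attributes via additive fusion.
        cond = self.pitch_proj(pitch) + self.speaker_emb(speaker_id)[:, None, :]
        recon = self.out_proj(self.decoder(h[:, res.size(1):] + cond))
        # Reconstruction loss on masked frames only.
        loss = ((recon - feats) ** 2)[mask].mean()
        return recon, loss
```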
📝 Abstract
Recent speech modeling relies on explicit attributes such as pitch, content, and speaker identity, but these alone cannot capture the full richness of natural speech. We introduce RT-MAE, a novel masked autoencoder framework that augments supervised, attribute-based modeling with unsupervised, trainable residual tokens designed to encode the information not explained by explicitly labeled factors (e.g., timbre variation, noise, emotion). Experiments show that RT-MAE improves reconstruction quality, preserving content and speaker similarity while enhancing expressivity. We further demonstrate its applicability to speech enhancement, removing noise at inference while maintaining controllability and naturalness.
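One way to read the enhancement claim: if the residual tokens absorb background noise during training, suppressing them at inference should yield a cleaner reconstruction. The usage sketch below, built on the hypothetical `ResidualTokenMAE` class above, illustrates that reading; the paper's actual inference procedure may differ.

```python
# Hypothetical enhancement usage: zero out the (noise-bearing) residual
# tokens at inference, keeping the explicit attributes intact. This
# mechanism is an assumption, not the paper's documented procedure.
model = ResidualTokenMAE()
model.eval()

noisy_feats = torch.randn(2, 100, 80)  # dummy mel-spectrogram batch
pitch = torch.rand(2, 100, 1)          # dummy per-frame pitch
speaker_id = torch.tensor([0, 1])      # dummy speaker indices

with torch.no_grad():
    model.residual_tokens.zero_()      # suppress the residual pathway
    clean_recon, _ = model(noisy_feats, pitch, speaker_id)
```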