🤖 AI Summary
The encoding of accent information in Discrete Speech Representation Tokens (DSRTs) remains poorly understood, and both systematic evaluation frameworks and effective modeling approaches are lacking. This work proposes a unified evaluation framework, featuring an Accent ABX task and cross-accent voice conversion resynthesis experiments, to systematically analyze how DSRTs represent accent. The analysis reveals that ASR fine-tuning substantially attenuates accent information, while naive codebook-size reduction fails to disentangle content from accent. Building on these findings, the study introduces new DSRT designs, content-only and content-accent joint models, that enable finer-grained accent control and significantly outperform existing approaches in accent-controllable speech generation.
📝 Abstract
Discrete Speech Representation Tokens (DSRTs) have become a foundational component in speech generation. While prior work has extensively studied phonetic and speaker information in DSRTs, how accent information is encoded in them remains largely unexplored. In this paper, we present the first systematic investigation of accent information in DSRTs. We propose a unified evaluation framework that measures both the accessibility of accent information, via a novel Accent ABX task, and its recoverability, via cross-accent Voice Conversion (VC) resynthesis. Using this framework, we analyse DSRTs derived from a variety of speech encoders. Our results reveal that accent information is substantially reduced when ASR supervision is used to fine-tune the encoder, but cannot be effectively disentangled from phonetic and speaker information through naive codebook-size reduction. Based on these findings, we propose new content-only and content-accent DSRTs that significantly outperform existing designs in accent-controllable generation. Our work highlights the importance of accent-aware evaluation and provides practical guidance for designing DSRTs for accent-controlled speech generation.
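To make the ABX idea concrete: an ABX task asks whether a held-out sample X is closer to A (which shares the probed property, here accent) than to B (which differs in it). The sketch below is an illustrative implementation over discrete token sequences, not the paper's exact protocol; the DTW mismatch cost and the triplet construction are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two token sequences,
    using a 0/1 mismatch cost per aligned pair (an assumed metric)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else 1.0
            # Extend the cheapest of the three alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def abx_accuracy(triplets):
    """Fraction of (A, B, X) triplets in which X, sharing its accent
    with A, is closer to A's token sequence than to B's."""
    correct = sum(dtw_distance(x, a) < dtw_distance(x, b)
                  for a, b, x in triplets)
    return correct / len(triplets)
```

An accuracy near 0.5 (chance) would indicate that the tokens carry little accessible accent information, while values near 1.0 would indicate that accent is strongly encoded; this is the sense in which an ABX score measures "accessibility".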