Multi-Scale Accent Modeling and Disentangling for Multi-Speaker Multi-Accent Text-to-Speech Synthesis

📅 2024-06-16
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
To address the strong coupling between speaker identity and accent characteristics in multi-speaker, multi-accent text-to-speech (TTS), this paper proposes an end-to-end disentanglement framework. Our method introduces a novel multi-scale accent modeling approach—combining global utterance-level and local phoneme-level representations—integrated with adversarial speaker disentanglement, phoneme-level accent prediction, and accent modulation modules. Crucially, it enables reference-free, accent-controllable synthesis without requiring phoneme-level accent annotations. The framework supports flexible accent switching for the same speaker while preserving individual acoustic characteristics. Evaluated on an English multi-accent dataset, our model achieves a 0.42 MOS improvement and an 18.7% increase in accent similarity over baselines. Ablation studies confirm the efficacy of each component. This work establishes a new paradigm for high-fidelity, editable multi-accent TTS systems.

Technology Category

Application Category

📝 Abstract
Generating speech across different accents while preserving speaker identity is crucial for various real-world applications. However, accurately and independently modeling both speaker and accent characteristics in text-to-speech (TTS) systems is challenging due to the complex variations of accents and the inherent entanglement between speaker and accent identities. In this paper, we propose a novel approach for multi-speaker multi-accent TTS synthesis that aims to synthesize speech for multiple speakers, each with various accents. Our approach employs a multi-scale accent modeling strategy to address accent variations on different levels. Specifically, we introduce both global (utterance level) and local (phoneme level) accent modeling to capture overall accent characteristics within an utterance and fine-grained accent variations across phonemes, respectively. To enable independent control of speakers and accents, we use the speaker embedding to represent speaker identity and achieve speaker-independent accent control through speaker disentanglement within the multi-scale accent modeling. Additionally, we present a local accent prediction model that enables our system to generate accented speech directly from phoneme inputs. We conduct extensive experiments on an English accented speech corpus. Experimental results demonstrate that our proposed system outperforms baseline systems in terms of speech quality and accent rendering for generating multi-speaker multi-accent speech. Ablation studies further validate the effectiveness of different components in our proposed system.
Problem

Research questions and friction points this paper is trying to address.

Text-to-Speech
Accent Diversity
Individual Characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilayer Accent Processing
Independent Control
High-quality Accent Speech Generation
🔎 Similar Papers
No similar papers found.