🤖 AI Summary
Existing flow-matching-based TTS models rely on text transcriptions aligned with audio prompts, which prevents transcription-free cross-lingual voice cloning, especially in low-resource or unseen-language scenarios. To address this, we propose a transcription-free cross-lingual voice cloning framework. Our method leverages forced alignment to extract word-level boundaries and jointly models speaking rate and duration at multiple linguistic granularities, explicitly decoupling duration prediction from spectrogram generation. Integrated into a flow-matching TTS architecture, the framework supports end-to-end training and inference. Experiments demonstrate that our approach preserves the high-fidelity synthesis quality of F5-TTS while substantially improving cross-lingual cloning performance, and it generalizes well to transcription-free prompts, low-resource languages, and entirely unseen languages. This work establishes a paradigm for zero-shot voice cloning that eliminates reliance on textual supervision while maintaining linguistic and prosodic fidelity across diverse language settings.
📝 Abstract
Flow-matching-based text-to-speech (TTS) models have demonstrated high-quality speech synthesis. However, most current flow-matching-based TTS models still rely on a reference transcript of the audio prompt for synthesis. This dependency prevents cross-lingual voice cloning when audio prompt transcripts are unavailable, particularly for unseen languages. The key challenges in removing audio prompt transcripts from flow-matching-based TTS models are identifying word boundaries during training and determining an appropriate duration during inference. In this paper, we introduce Cross-Lingual F5-TTS, a framework that enables cross-lingual voice cloning without audio prompt transcripts. Our method preprocesses audio prompts with forced alignment to obtain word boundaries, enabling direct synthesis from audio prompts while excluding transcripts during training. To address the duration modeling challenge, we train speaking rate predictors at different linguistic granularities and derive the target duration from the speaker's pace. Experiments show that our approach matches the performance of F5-TTS while enabling cross-lingual voice cloning.
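The duration-modeling idea above can be illustrated with a minimal sketch: given word boundaries from forced alignment on the audio prompt, estimate the speaker's pace and scale it to the target text length. This is only a word-level heuristic under assumed names (`WordSpan`, `estimate_duration`, `pause_ratio` are hypothetical); the paper itself trains learned speaking-rate predictors at multiple linguistic granularities rather than using a fixed formula.

```python
from dataclasses import dataclass

@dataclass
class WordSpan:
    """One word interval from a forced aligner (times in seconds)."""
    word: str
    start: float
    end: float

def speaking_rate(spans: list[WordSpan]) -> float:
    """Words per second over the voiced portion of the prompt."""
    voiced = sum(s.end - s.start for s in spans)
    return len(spans) / voiced

def estimate_duration(spans: list[WordSpan],
                      target_word_count: int,
                      pause_ratio: float = 0.15) -> float:
    """Estimate target utterance duration from the prompt's pace.

    pause_ratio adds headroom for inter-word silences, which the
    voiced-only rate estimate does not capture.
    """
    rate = speaking_rate(spans)
    return target_word_count / rate * (1.0 + pause_ratio)

# Example: a 3-word prompt with 1.5 s of voiced audio -> 2.0 words/s,
# so a 6-word target is allotted 6 / 2.0 * 1.15 = 3.45 s.
prompt = [WordSpan("hello", 0.00, 0.40),
          WordSpan("there", 0.50, 1.00),
          WordSpan("friend", 1.10, 1.70)]
duration = estimate_duration(prompt, target_word_count=6)
```

A learned predictor would replace the fixed `pause_ratio` and per-word averaging with granularity-specific estimates (e.g., phoneme- or syllable-level), but the interface is the same: prompt alignment in, target duration out.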