Cross-Lingual F5-TTS: Towards Language-Agnostic Voice Cloning and Speech Synthesis

📅 2025-09-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing flow-matching-based TTS models rely on text transcriptions aligned with audio prompts, which precludes transcription-free cross-lingual voice cloning, especially in low-resource or unseen-language scenarios. To address this, we propose the first transcription-free cross-lingual voice cloning framework. Our method leverages forced alignment to extract word-level boundaries and jointly models speaking rate and duration at multiple granularities, explicitly decoupling phoneme duration prediction from spectrogram generation. Integrated into a flow-matching TTS architecture, the framework supports end-to-end training and inference. Experiments demonstrate that our approach preserves the high-fidelity synthesis quality of F5-TTS while substantially improving cross-lingual cloning performance, with strong generalization to transcription-free prompts, low-resource languages, and entirely unseen languages. This work establishes a novel paradigm for zero-shot voice cloning, eliminating reliance on textual supervision while maintaining linguistic and prosodic fidelity across diverse language settings.

📝 Abstract
Flow-matching-based text-to-speech (TTS) models have shown high-quality speech synthesis. However, most current flow-matching-based TTS models still rely on reference transcripts corresponding to the audio prompt for synthesis. This dependency prevents cross-lingual voice cloning when audio prompt transcripts are unavailable, particularly for unseen languages. The key challenges for flow-matching-based TTS models to remove audio prompt transcripts are identifying word boundaries during training and determining appropriate duration during inference. In this paper, we introduce Cross-Lingual F5-TTS, a framework that enables cross-lingual voice cloning without audio prompt transcripts. Our method preprocesses audio prompts by forced alignment to obtain word boundaries, enabling direct synthesis from audio prompts while excluding transcripts during training. To address the duration modeling challenge, we train speaking rate predictors at different linguistic granularities to derive duration from speaker pace. Experiments show that our approach matches the performance of F5-TTS while enabling cross-lingual voice cloning.
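The duration mechanism described in the abstract can be illustrated with a minimal sketch: measure the prompt's speaking rate from forced-alignment word boundaries, then derive the target duration from the target word count and the predicted pace. This is only an illustration at word granularity (the paper trains predictors at multiple linguistic granularities), and the function names are hypothetical, not from the paper's code.

```python
def speaking_rate(word_boundaries, audio_seconds):
    """Words per second of the audio prompt, given forced-alignment
    word boundaries as (start, end) pairs in seconds."""
    if audio_seconds <= 0:
        raise ValueError("audio length must be positive")
    return len(word_boundaries) / audio_seconds

def estimate_duration(target_word_count, predicted_rate):
    """Seconds of speech to synthesize, derived from speaker pace
    instead of a reference transcript for the prompt."""
    return target_word_count / predicted_rate

# Example: a 2.5 s prompt with 5 aligned words -> 2.0 words/s
rate = speaking_rate(
    [(0.0, 0.4), (0.5, 0.9), (1.0, 1.4), (1.6, 2.0), (2.1, 2.5)], 2.5
)
duration = estimate_duration(8, rate)  # 8-word target -> 4.0 s
```

In practice the rate would come from a trained predictor conditioned on the prompt rather than being measured directly, but the derivation of duration from pace is the same.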
Problem

Research questions and friction points this paper is trying to address.

Flow-matching TTS models depend on audio prompt transcripts, blocking cross-lingual voice cloning for unseen languages
Without a transcript, word boundaries must be identified during training
Without a transcript, an appropriate speech duration must be determined during inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forced alignment for word boundary identification
Speaking rate predictors for duration modeling
Audio prompt synthesis without reference transcripts
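As a rough sketch of the forced-alignment preprocessing step above, per-frame word labels emitted by an aligner can be collapsed into word-level (start, end) boundaries. The frame labels, the hop size in milliseconds, and the function name are illustrative assumptions, not the paper's actual pipeline.

```python
def word_boundaries(frame_words, hop_ms):
    """Collapse per-frame word indices (None = silence) into a list of
    (start_ms, end_ms) word boundaries."""
    boundaries = []
    current, start = None, None
    for i, w in enumerate(frame_words + [None]):  # sentinel flushes the last word
        if w != current:
            if current is not None:
                boundaries.append((start * hop_ms, i * hop_ms))
            current, start = w, i
    return boundaries

# Two words over 6 frames at a 10 ms hop
print(word_boundaries([0, 0, None, 1, 1, 1], 10))  # -> [(0, 20), (30, 60)]
```

The resulting boundaries are what the training pipeline needs in place of a transcript: they say where each word is, without saying what it is.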
Qingyu Liu
Electronic and Computer Engineering, Peking University
wireless networking, mobile networking, internet of things, intelligent transportation
Yushen Chen
Shanghai Jiao Tong University
Speech and Language Processing
Zhikang Niu
Shanghai Jiao Tong University
Speech Synthesis
Chunhui Wang
Geely, China
Yunting Yang
Geely, China
Bowen Zhang
Geely, China
Jian Zhao
Geely, China
Pengcheng Zhu
Fuxi AI Lab, NetEase Inc.
speech synthesis, singing voice synthesis, talking avatar, voice conversion
Kai Yu
MoE Key Lab of Artificial Intelligence, X-LANCE Lab, School of Computer Science, Shanghai Jiao Tong University, China
Xie Chen
MoE Key Lab of Artificial Intelligence, X-LANCE Lab, School of Computer Science, Shanghai Jiao Tong University, China