MiniMax-Speech: Intrinsic Zero-Shot Text-to-Speech with a Learnable Speaker Encoder

📅 2025-05-12
🤖 AI Summary
This work addresses key challenges in zero-shot voice cloning—namely, reliance on reference speech transcription, low speaker identity fidelity, and limited cross-lingual support—by proposing MiniMax-Speech, a Transformer-based autoregressive text-to-speech (TTS) model. Its core innovation is a learnable speaker encoder that enables robust, transcription-free speaker embedding extraction and disentanglement. Integrated with Flow-VAE acoustic modeling, cross-lingual phoneme representations, and LoRA-based emotion fine-tuning, the model supports zero-shot TTS, text-to-voice (T2V), and professional-grade voice cloning (PVC). Trained on 32 languages, MiniMax-Speech attains state-of-the-art objective scores on voice cloning (WER, Speaker Similarity) and tops the public TTS Arena leaderboard, demonstrating superior subjective quality and multilingual synthesis capability.

📝 Abstract
We introduce MiniMax-Speech, an autoregressive Transformer-based Text-to-Speech (TTS) model that generates high-quality speech. A key innovation is our learnable speaker encoder, which extracts timbre features from a reference audio without requiring its transcription. This enables MiniMax-Speech to produce highly expressive speech with timbre consistent with the reference in a zero-shot manner, while also supporting one-shot voice cloning with exceptionally high similarity to the reference voice. In addition, the overall quality of the synthesized audio is enhanced through the proposed Flow-VAE. Our model supports 32 languages and demonstrates excellent performance across multiple objective and subjective evaluation metrics. Notably, it achieves state-of-the-art (SOTA) results on objective voice cloning metrics (Word Error Rate and Speaker Similarity) and has secured the top position on the public TTS Arena leaderboard. Another key strength of MiniMax-Speech, granted by the robust and disentangled representations from the speaker encoder, is its extensibility without modifying the base model, enabling various applications such as: arbitrary voice emotion control via LoRA; text to voice (T2V) by synthesizing timbre features directly from text description; and professional voice cloning (PVC) by fine-tuning timbre features with additional data. We encourage readers to visit https://minimax-ai.github.io/tts_tech_report for more examples.
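To make the transcription-free conditioning idea concrete, the sketch below shows the general pattern in numpy: a reference mel-spectrogram is pooled into a fixed-size timbre vector (a stand-in for the paper's learnable speaker encoder, whose actual architecture is not given here), the vector is prepended to the decoder's input sequence, and cosine similarity serves as the usual basis of a Speaker Similarity score. All function names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_speaker_embedding(ref_mel: np.ndarray) -> np.ndarray:
    """Stand-in for the learnable speaker encoder: pool a reference
    mel-spectrogram (frames x mel_bins) into a fixed-size timbre vector.
    Note that no transcription of the reference audio is involved."""
    emb = ref_mel.mean(axis=0)                 # temporal mean pooling
    return emb / (np.linalg.norm(emb) + 1e-8)  # L2-normalize

def condition_decoder_inputs(text_embs: np.ndarray,
                             speaker_emb: np.ndarray) -> np.ndarray:
    """Prepend the timbre vector to the text embedding sequence -- the
    simplest way an autoregressive decoder can be speaker-conditioned."""
    return np.vstack([speaker_emb[None, :], text_embs])

def speaker_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, the typical basis of objective SIM metrics."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy usage: 120 reference frames with 80 mel bins; 5 text tokens in R^80.
rng = np.random.default_rng(0)
emb = extract_speaker_embedding(rng.standard_normal((120, 80)))
seq = condition_decoder_inputs(rng.standard_normal((5, 80)), emb)
print(seq.shape)  # (6, 80): timbre vector followed by 5 token embeddings
```

Because the timbre vector is extracted independently of any text, the same mechanism supports the paper's extensions: a T2V module can synthesize the vector from a description, and PVC can fine-tune it on additional speaker data, all without touching the base model.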
Problem

Research questions and friction points this paper is trying to address.

Develop zero-shot TTS with learnable speaker encoder
Enable expressive speech and high-similarity voice cloning
Support multilingual applications and extensible model features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive Transformer-based TTS model
Learnable speaker encoder for zero-shot timbre
Flow-VAE enhances synthesized audio quality
Bowen Zhang
Congchao Guo
Geng Yang
Hang Yu
Haozhe Zhang
Heidi Lei
Jialong Mai
Junjie Yan
Kaiyue Yang
Mingqi Yang
Peikai Huang
Ruiyang Jin
Sitan Jiang
Weihua Cheng
Yawei Li
Yichen Xiao
Yiying Zhou
Yongmao Zhang
Yuan Lu
Yucen He