🤖 AI Summary
To address the demand for high-quality, real-time, multilingual text-to-speech (TTS) with fine-grained control over emotion and non-verbal vocalizations, this paper introduces Inworld TTS-1, a pair of Transformer-based autoregressive TTS models built around a speech-language model (SpeechLM) component: TTS-1 (1.6B parameters, enabling low-latency 48 kHz on-device synthesis) and TTS-1-Max (8.8B parameters, targeting maximum audio fidelity). The SpeechLM is aligned through a three-stage pipeline of pre-training, supervised fine-tuning, and reinforcement learning from human feedback (RLHF), yielding context-based, token-level control over multilingual output (11 languages), nuanced prosody, and non-verbal sounds (e.g., laughter, sighs) via audio markups. Both models achieve state-of-the-art results on a variety of benchmarks while cloning a speaker's voice purely through in-context learning. The training and modeling code is released under an MIT license.
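The three-stage alignment of the SpeechLM component can be sketched as a simple sequential pipeline. This is a toy illustration only: the stage functions below are stubs with assumed names, not the paper's actual training code.

```python
# Toy sketch of the three-stage SpeechLM alignment pipeline
# (pre-training -> supervised fine-tuning -> RLHF). Every function
# name and data key here is an illustrative assumption.

def pretrain(model, corpus):
    # Stage 1: large-scale pre-training on text/audio-token data.
    model["stages"].append("pretrain")
    return model

def supervised_finetune(model, pairs):
    # Stage 2: supervised fine-tuning on curated (text, speech) pairs.
    model["stages"].append("sft")
    return model

def rlhf(model, preferences):
    # Stage 3: RLHF against human preference judgments of synthesized speech.
    model["stages"].append("rlhf")
    return model

def align_speechlm(model, corpora):
    model = pretrain(model, corpora["pretrain"])
    model = supervised_finetune(model, corpora["sft"])
    model = rlhf(model, corpora["preferences"])
    return model

m = align_speechlm({"stages": []},
                   {"pretrain": [], "sft": [], "preferences": []})
# m["stages"] == ["pretrain", "sft", "rlhf"]
```

The ordering matters: each stage starts from the checkpoint produced by the previous one, which is what "sequential process" in the abstract refers to.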
📝 Abstract
We introduce Inworld TTS-1, a set of two Transformer-based autoregressive text-to-speech (TTS) models. Our largest model, TTS-1-Max, has 8.8B parameters and is designed for utmost quality and expressiveness in demanding applications. TTS-1 is our most efficient model, with 1.6B parameters, built for real-time speech synthesis and on-device use cases. By scaling train-time compute and applying a sequential process of pre-training, fine-tuning, and RL-alignment of the speech-language model (SpeechLM) component, both models achieve state-of-the-art performance on a variety of benchmarks, demonstrating exceptional quality while relying purely on in-context learning of the speaker's voice. Inworld TTS-1 and TTS-1-Max can generate high-resolution 48 kHz speech with low latency, and support 11 languages with fine-grained emotional control and non-verbal vocalizations through audio markups. We additionally open-source our training and modeling code under an MIT license.
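To make the "audio markups" idea concrete, here is a hedged illustration of how emotion and non-verbal tags might be spliced into input text. The abstract confirms that markups drive emotional control and non-verbal vocalizations, but the concrete tag syntax below (`[happy]`, `[laugh]`) and the helper function are assumptions for illustration, not the model's documented format.

```python
# Hypothetical audio-markup helper. The bracket-tag syntax is assumed,
# not taken from the Inworld TTS-1 documentation.

def add_markup(text, emotion=None, nonverbal=None):
    parts = []
    if emotion:
        parts.append(f"[{emotion}]")    # hypothetical emotion tag
    parts.append(text)
    if nonverbal:
        parts.append(f"[{nonverbal}]")  # hypothetical non-verbal tag
    return " ".join(parts)

prompt = add_markup("That was amazing!", emotion="happy", nonverbal="laugh")
# prompt == "[happy] That was amazing! [laugh]"
```

Because the SpeechLM is autoregressive over tokens, such inline tags can condition generation at specific points in the utterance, which is what makes token-level control of prosody and laughter-like sounds possible.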