Lightweight End-to-end Text-to-speech Synthesis for low resource on-device applications

📅 2023-08-26
🏛️ Speech Synthesis Workshop
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the challenge of deploying high-complexity end-to-end (E2E) text-to-speech (TTS) models on low-resource edge devices, this work proposes a lightweight single-stage E2E TTS architecture. The method introduces a compact encoder-decoder structure, a jointly optimized waveform generation module, and an end-to-end joint training paradigm. The resulting model has only 0.5M parameters (90% fewer than mainstream E2E TTS models) and attains a real-time factor (RTF) of 0.1, a 10× speedup. On the LJSpeech benchmark, it surpasses two-stage systems of comparable size in speech quality, establishing new state-of-the-art (SOTA) performance for lightweight TTS. This work is the first to demonstrate that an ultra-lightweight E2E TTS system (<1M parameters) can simultaneously achieve high inference efficiency and better naturalness than conventional two-stage approaches, thereby establishing a novel paradigm for real-time, edge-deployable speech synthesis.
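The real-time factor (RTF) quoted above is the standard efficiency metric for on-device TTS: synthesis wall-clock time divided by the duration of the audio produced, so RTF < 1 means faster than real time. A minimal sketch of the computation (the timing numbers are illustrative placeholders, not measurements from the paper):

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent synthesizing / duration of the generated audio.

    RTF < 1.0 means the system runs faster than real time; the paper
    reports an RTF of roughly 0.1 for its 0.5M-parameter model.
    """
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return synthesis_seconds / audio_seconds

# Hypothetical example: 0.5 s of compute to synthesize 5 s of speech.
print(real_time_factor(0.5, 5.0))  # 0.1
```

On an edge device the synthesis time would typically be measured around the model's inference call (e.g. with `time.perf_counter()`), averaged over many utterances to smooth out scheduling noise.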

📝 Abstract
Recent works have shown that modelling raw waveforms directly from text in an end-to-end (E2E) fashion produces more natural-sounding speech than traditional neural text-to-speech (TTS) systems based on a cascade or two-stage approach. However, current E2E state-of-the-art models are computationally complex and memory-consuming, making them unsuitable for real-time offline on-device applications in low-resource scenarios. To address this issue, we propose a Lightweight E2E-TTS (LE2E) model that generates high-quality speech while requiring minimal computational resources. We evaluate the proposed model on the LJSpeech dataset and show that it achieves state-of-the-art performance while being up to 90% smaller in terms of model parameters and 10× faster in real-time factor. Furthermore, we demonstrate that the proposed E2E training paradigm achieves better quality compared to an equivalent architecture trained in a two-stage approach. Our results suggest that LE2E is a promising approach for developing real-time, high-quality TTS for low-resource on-device applications.
Problem

Research questions and friction points this paper is trying to address.

Developing lightweight E2E-TTS for low-resource devices
Reducing computational complexity in real-time TTS systems
Improving speech quality with minimal model parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight E2E-TTS model for low-resource devices
Generates high-quality speech with minimal resources
90% smaller and 10x faster than current models
Biel Tura Vecino
Applied Scientist
Adam Gabrys
Alexa AI
Daniel Matwicki
Alexa AI
Andrzej Pomirski
Senior Software Development Engineer, Amazon Alexa AI
Tom Iddon
Alexa AI
Marius Cotescu
Alexa AI
Jaime Lorenzo-Trueba
Assembly AI
SpeechML