🤖 AI Summary
To address the challenge of deploying high-complexity end-to-end (E2E) text-to-speech (TTS) models on low-resource edge devices, this work proposes a lightweight single-stage E2E TTS architecture. The method introduces a compact encoder-decoder structure, a jointly optimized waveform generation module, and an end-to-end joint training paradigm. The resulting model has only 0.5M parameters (90% fewer than mainstream E2E TTS models) and attains a real-time factor (RTF) of 0.1, a 10× speedup. On the LJSpeech benchmark, it surpasses two-stage systems of comparable size in speech quality, establishing new state-of-the-art (SOTA) performance for lightweight TTS. This work is the first to demonstrate that an ultra-lightweight E2E TTS system (<1M parameters) can simultaneously achieve high inference efficiency and better naturalness than conventional two-stage approaches, establishing a new paradigm for real-time, edge-deployable speech synthesis.
📝 Abstract
Recent works have shown that modelling raw waveform directly from text in an end-to-end (E2E) fashion produces more natural-sounding speech than traditional neural text-to-speech (TTS) systems based on a cascade or two-stage approach. However, current E2E state-of-the-art models are computationally complex and memory-consuming, making them unsuitable for real-time offline on-device applications in low-resource scenarios. To address this issue, we propose a Lightweight E2E-TTS (LE2E) model that generates high-quality speech while requiring minimal computational resources. We evaluate the proposed model on the LJSpeech dataset and show that it achieves state-of-the-art performance while being up to 90% smaller in terms of model parameters and 10× faster in real-time factor. Furthermore, we demonstrate that the proposed E2E training paradigm achieves better quality compared to an equivalent architecture trained in a two-stage approach. Our results suggest that LE2E is a promising approach for developing real-time, high-quality, low-resource TTS for on-device applications.