Sidon: Fast and Robust Open-Source Multilingual Speech Restoration for Large-scale Dataset Cleansing

📅 2025-09-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limitation of multilingual text-to-speech (TTS) systems caused by the scarcity of high-quality recorded speech data, this paper proposes Sidon, an open-source speech restoration framework that efficiently converts noisy, in-the-wild speech into studio-grade clean speech. Methodologically, it pairs a fine-tuned w2v-BERT 2.0 feature predictor, which maps noisy acoustic features to cleansed ones, with a lightweight vocoder that synthesizes restored speech from those features. The system runs up to 3,390 times faster than real time on a single GPU and generalizes zero-shot across dozens of languages. Key contributions include: (1) the first fully open-source framework achieving restoration performance comparable to Google's internal Miipher model; and (2) a demonstration that training TTS models on ASR corpora cleansed by Sidon improves the naturalness and intelligibility of synthetic speech, establishing its practicality for large-scale, real-world multilingual speech corpus cleaning.
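The claimed speedup is easy to put in concrete terms: at 3,390× faster than real time, wall-clock processing time is the audio duration divided by 3,390. A quick sanity check (illustrative arithmetic only; the corpus size is a made-up example, not a figure from the paper):

```python
def processing_time_s(audio_hours: float, rtf_speedup: float = 3390.0) -> float:
    """Wall-clock seconds needed to restore `audio_hours` of speech,
    given a speedup factor over real time (3,390x per the paper)."""
    return audio_hours * 3600.0 / rtf_speedup

# Cleansing a hypothetical 1,000-hour ASR corpus on a single GPU:
print(round(processing_time_s(1000), 1))  # ~1061.9 seconds, i.e. under 18 minutes
```

This is why the authors can position Sidon as a dataset-cleansing tool rather than only an enhancement model: entire ASR corpora become cheap to restore.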

📝 Abstract
Large-scale text-to-speech (TTS) systems are limited by the scarcity of clean, multilingual recordings. We introduce Sidon, a fast, open-source speech restoration model that converts noisy in-the-wild speech into studio-quality speech and scales to dozens of languages. Sidon consists of two models: a feature predictor, fine-tuned from w2v-BERT 2.0, that cleanses features extracted from noisy speech, and a vocoder trained to synthesize restored speech from the cleansed features. Sidon achieves restoration performance comparable to Miipher, Google's internal speech restoration model aimed at dataset cleansing for speech synthesis. Sidon is also computationally efficient, running up to 3,390 times faster than real time on a single GPU. We further show that training a TTS model on a Sidon-cleansed automatic speech recognition corpus improves the quality of synthetic speech in a zero-shot setting. Code and models are released to facilitate reproducible dataset cleansing for the research community.
Problem

Research questions and friction points this paper is trying to address.

Addressing scarcity of clean multilingual recordings for TTS systems
Converting noisy real-world speech into studio-quality recordings
Enabling large-scale dataset cleansing for speech synthesis research
Innovation

Methods, ideas, or system contributions that make the work stand out.

w2v-BERT 2.0 fine-tuned feature predictor
Vocoder trained on cleansed features
Fast multilingual speech restoration for dataset cleansing