WhispSynth: Scaling Multilingual Whisper Corpus through Real Data Curation and A Novel Pitch-free Generative Framework

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the difficulty of acquiring high-quality whispered speech: because whispers are inherently low-amplitude, data collection has severely hindered research on whisper-based speech generation and recognition. To overcome the limitations of conventional synthetic or noise-contaminated data, the authors propose a pitch-free differentiable digital signal processing (DDSP) framework integrated with text-to-speech (TTS) synthesis that preserves the source vocal timbre and linguistic content while ensuring acoustic consistency. Using this framework to refine existing resources, including their newly collected real-world whisper dataset WhispNJU, they construct WhispSynth, a high-fidelity multilingual whispered-speech corpus comprising 118 hours of audio from 479 speakers. Fine-tuning their CosyWhisper model on this corpus yields synthetic whispers whose naturalness closely matches that of authentic whispered samples.

📝 Abstract
Whisper generation is constrained by the difficulty of data collection: because whispered speech has low acoustic amplitude, high-fidelity recording is challenging. In this paper, we introduce WhispSynth, a large-scale multilingual corpus constructed via a novel high-fidelity generative framework. Specifically, we propose a pipeline integrating a Differentiable Digital Signal Processing (DDSP)-based pitch-free method with Text-to-Speech (TTS) models. This framework refines a comprehensive collection of resources, including our newly constructed WhispNJU dataset, into 118 hours of high-fidelity whispered speech from 479 speakers. Unlike standard synthetic or noisy real data, our data engine faithfully preserves source vocal timbre and linguistic content while ensuring acoustic consistency, providing a robust foundation for text-to-whisper research. Experimental results demonstrate that WhispSynth exhibits significantly higher quality than existing corpora. Moreover, our CosyWhisper model, tuned with WhispSynth, achieves speech naturalness on par with ground-truth samples. The official implementation and related resources are available at https://github.com/tan90xx/cosywhisper.
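The core idea behind pitch-free synthesis can be illustrated with a classic "whisperization" toy: replace the pitched glottal excitation of each analysis frame with white noise while keeping the frame's spectral magnitude envelope, so formants and timbre survive but F0 disappears. The sketch below is a minimal, frame-based NumPy illustration of this principle only; the function name `whisperize` and all parameters are hypothetical, and this is not the paper's DDSP framework, which is learned and differentiable.

```python
import numpy as np

def whisperize(speech, frame=512, hop=128):
    """Toy pitch-free resynthesis: per frame, keep the magnitude
    spectrum (formant/timbre cue) but take the phase from a white-noise
    excitation, removing any harmonic (pitch) structure.
    Illustrative sketch only, not the WhispSynth DDSP pipeline."""
    rng = np.random.default_rng(0)
    win = np.hanning(frame)
    out = np.zeros(len(speech) + frame)
    for start in range(0, len(speech) - frame, hop):
        seg = speech[start:start + frame] * win
        env = np.abs(np.fft.rfft(seg))            # coarse magnitude envelope
        noise = rng.standard_normal(frame) * win  # unvoiced excitation
        phase = np.angle(np.fft.rfft(noise))
        resynth = np.fft.irfft(env * np.exp(1j * phase), n=frame)
        out[start:start + frame] += resynth * win # overlap-add
    return out[:len(speech)]

# Toy input: a 200 Hz "voiced" tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
voiced = np.sin(2 * np.pi * 200 * t)
whisper = whisperize(voiced)
print(whisper.shape)  # same length as the input
```

In a learned DDSP setting, the hand-crafted envelope above would instead be predicted by a network and the noise-filtering stage kept differentiable, which is the kind of harmonics-free (noise-only) synthesis path the paper's pitch-free framework builds on.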
Problem

Research questions and friction points this paper is trying to address.

whispered speech
data scarcity
multilingual corpus
high-fidelity recording
text-to-whisper
Innovation

Methods, ideas, or system contributions that make the work stand out.

pitch-free synthesis
Differentiable Digital Signal Processing (DDSP)
whispered speech corpus
multilingual TTS
high-fidelity voice generation
Tianyi Tan
Key Laboratory of Modern Acoustics, Nanjing University, Nanjing 210093, China
Jiaxin Ye
Fudan University, Shanghai, China
Yuanming Zhang
Key Laboratory of Modern Acoustics, Nanjing University, Nanjing 210093, China
Xiaohuai Le
ByteDance, China
Xianjun Xia
ByteDance, China
Chuanzeng Huang
ByteDance, China
Jing Lu
University of California, Santa Barbara