Vocoder-Projected Feature Discriminator

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In text-to-speech (TTS) and voice conversion (VC), modeling mel-spectrograms limits waveform quality, while adversarial training directly in the waveform domain incurs prohibitive upsampling overhead. Method: We propose a time-domain feature-space adversarial training framework centered on the Vocoder-Projected Feature Discriminator (VPFD): a frozen, pretrained vocoder extracts intermediate-layer features, so the adversarial loss is computed in feature space with only a single upsampling step. Applied to knowledge distillation of a diffusion-based VC model, VPFD avoids the repeated upsampling that waveform-level discriminators require. Contribution/Results: On voice conversion, VPFD achieves synthesis quality comparable to waveform discriminators while reducing training time by 9.6× and memory consumption by 11.4×, significantly improving the practicality of efficient, high-fidelity speech generation.

📝 Abstract
In text-to-speech (TTS) and voice conversion (VC), acoustic features, such as mel spectrograms, are typically used as synthesis or conversion targets owing to their compactness and ease of learning. However, because the ultimate goal is to generate high-quality waveforms, employing a vocoder to convert these features into waveforms and applying adversarial training in the time domain is reasonable. Nevertheless, upsampling the waveform introduces significant time and memory overheads. To address this issue, we propose a vocoder-projected feature discriminator (VPFD), which uses vocoder features for adversarial training. Experiments on diffusion-based VC distillation demonstrated that a pretrained and frozen vocoder feature extractor with a single upsampling step is necessary and sufficient to achieve a VC performance comparable to that of waveform discriminators while reducing the training time and memory consumption by 9.6 and 11.4 times, respectively.
Problem

Research questions and friction points this paper is trying to address.

Reducing the time and memory overhead of time-domain adversarial training
Improving the efficiency of waveform generation in TTS/VC
Matching waveform-discriminator performance with fewer computational resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vocoder-projected feature discriminator (VPFD) for adversarial training
Uses a pretrained, frozen vocoder feature extractor
A single upsampling step suffices, reducing training time and memory consumption
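The core idea can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the module names, layer sizes, and the LSGAN-style loss are assumptions. A frozen stand-in for the early layers of a pretrained vocoder maps mel-spectrograms to intermediate features with a single upsampling step, and a small discriminator computes the adversarial loss in that feature space rather than on full waveforms.

```python
import torch
import torch.nn as nn


class VocoderFeatureExtractor(nn.Module):
    """Stand-in for the early layers of a pretrained vocoder (kept frozen)."""

    def __init__(self, n_mels=80, channels=128):
        super().__init__()
        self.pre = nn.Conv1d(n_mels, channels, kernel_size=7, padding=3)
        # A single 2x upsampling step, instead of the full upsampling
        # stack a waveform-level discriminator would require.
        self.up = nn.ConvTranspose1d(channels, channels // 2,
                                     kernel_size=4, stride=2, padding=1)

    def forward(self, mel):
        return self.up(torch.relu(self.pre(mel)))


class FeatureDiscriminator(nn.Module):
    """Small convolutional discriminator over vocoder features."""

    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, 5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(channels, 1, 3, padding=1),
        )

    def forward(self, feat):
        return self.net(feat)


extractor = VocoderFeatureExtractor()
for p in extractor.parameters():       # frozen: no gradients to the vocoder
    p.requires_grad_(False)
disc = FeatureDiscriminator()

mel_real = torch.randn(2, 80, 32)      # (batch, n_mels, frames)
mel_fake = torch.randn(2, 80, 32)      # e.g. output of a distilled VC student

feat_real = extractor(mel_real)        # features, not waveforms
feat_fake = extractor(mel_fake)

# LSGAN-style discriminator loss, computed entirely in feature space.
d_loss = ((disc(feat_real) - 1) ** 2).mean() + (disc(feat_fake) ** 2).mean()
```

Because the extractor is frozen and upsamples only once, the discriminator's forward and backward passes operate on short feature sequences, which is where the reported time and memory savings come from.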