Fast-VGAN: Lightweight Voice Conversion with Explicit Control of F0 and Duration Parameters

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In voice conversion, fine-grained and disentangled control over key acoustic features—such as fundamental frequency (F0), phoneme duration, and phoneme sequence—remains a core challenge. This paper proposes a lightweight end-to-end voice conversion framework that eliminates reliance on traditional signal-processing vocoders and complex disentanglement modules. Instead, it employs explicit conditioning mechanisms to directly govern F0 contours, phoneme durations, and phoneme sequences. Mel-spectrogram generation is modeled with a convolutional network, enhanced with pretrained speaker embeddings, and high-fidelity waveforms are synthesized via a universal neural vocoder. The method achieves high naturalness and audio quality while significantly improving controllability and flexibility. Experiments show competitive performance on both speaker conversion and expressive speech synthesis tasks, as evidenced by objective metrics (e.g., MCD, F0 RMSE) and favorable subjective evaluations (similarity, naturalness, intelligibility).
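The conditioning scheme described above—phoneme embeddings expanded to frame level by their durations, then concatenated with the F0 contour and a speaker embedding—can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `build_conditioning`, the embedding dimensions, and the random stand-ins for the pretrained speaker embedding are all hypothetical.

```python
import numpy as np

def build_conditioning(phone_emb, durations, f0, spk_emb):
    """Expand phoneme embeddings to frame level by duration, then
    concatenate the per-frame F0 contour and a (tiled) speaker embedding.
    The result is the explicit conditioning input to the mel generator."""
    frames = np.repeat(phone_emb, durations, axis=0)   # (T, phone_dim)
    spk = np.tile(spk_emb, (frames.shape[0], 1))       # (T, spk_dim)
    return np.concatenate([frames, f0[:, None], spk], axis=1)

rng = np.random.default_rng(0)
phone_emb = rng.normal(size=(3, 8))    # 3 phonemes, toy 8-dim embeddings
durations = np.array([4, 6, 5])        # frames per phoneme, T = 15 total
f0 = rng.uniform(100, 200, size=15)    # Hz contour, freely editable at inference
spk_emb = rng.normal(size=16)          # stand-in for a pretrained speaker embedding
cond = build_conditioning(phone_emb, durations, f0, spk_emb)
print(cond.shape)  # (15, 25): 8 phone dims + 1 F0 dim + 16 speaker dims
```

Because the model is conditioned directly on these frame-level features rather than on disentangled latents, swapping `f0`, `durations`, or `spk_emb` at inference changes pitch, timing, or identity independently.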

📝 Abstract
Precise control over speech characteristics, such as pitch, duration, and speech rate, remains a significant challenge in the field of voice conversion. The ability to manipulate parameters like pitch and syllable rate is an important element for effective identity conversion, but can also be used independently for voice transformation, achieving goals that were historically addressed by vocoder-based methods. In this work, we explore a convolutional neural network-based approach that aims to provide means for modifying fundamental frequency (F0), phoneme sequences, intensity, and speaker identity. Rather than relying on disentanglement techniques, our model is explicitly conditioned on these factors to generate mel spectrograms, which are then converted into waveforms using a universal neural vocoder. Accordingly, during inference, F0 contours, phoneme sequences, and speaker embeddings can be freely adjusted, allowing for intuitively controlled voice transformations. We evaluate our approach on speaker conversion and expressive speech tasks using both perceptual and objective metrics. The results suggest that the proposed method offers substantial flexibility, while maintaining high intelligibility and speaker similarity.
Problem

Research questions and friction points this paper is trying to address.

Precise control of pitch and duration in voice conversion
Lightweight model for modifying F0 and speaker identity
Flexible voice transformation without vocoder-based methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

CNN-based model for voice parameter control
Explicit conditioning on F0 and phonemes
Universal neural vocoder for waveform conversion
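Since the model consumes an explicit F0 contour and explicit phoneme durations, inference-time control reduces to editing those inputs before generation. A minimal sketch of two such edits, assuming frame-count durations and an F0 contour in Hz (both function names are illustrative, not from the paper):

```python
import numpy as np

def shift_f0(f0_hz, semitones):
    """Transpose an F0 contour by a number of semitones
    (12 semitones = one octave, i.e. doubling the frequency)."""
    return f0_hz * 2.0 ** (semitones / 12.0)

def scale_durations(durations, factor):
    """Stretch (factor > 1) or compress (factor < 1) phoneme durations,
    measured in frames, keeping every phoneme at least one frame long."""
    return np.maximum(1, np.round(durations * factor)).astype(int)

f0 = np.array([110.0, 220.0, 440.0])
print(shift_f0(f0, 12))                           # one octave up: [220. 440. 880.]
print(scale_durations(np.array([4, 6, 5]), 1.5))  # slower speech: [6 9 8]
```

The edited contour and durations are then fed to the conditioned mel generator, and the universal neural vocoder converts the resulting mel spectrogram to a waveform.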