Conan: A Chunkwise Online Network for Zero-Shot Adaptive Voice Conversion

📅 2025-07-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing zero-shot voice conversion methods struggle to simultaneously achieve semantic fidelity, timbre naturalness, and generalization to unseen speakers under real-time constraints. This paper proposes Conan, a chunk-based online voice conversion framework comprising a streaming content extractor built on Emformer, a fine-grained adaptive style encoder, and a causal pixel-shuffle vocoder (a fully causal HiFiGAN variant). The architecture enables low-latency (<100 ms), high-fidelity, end-to-end voice conversion with zero-shot adaptation to unseen target speakers. Objective evaluations show a 12.3% reduction in Mel Cepstral Distortion (MCD) and an 8.7% improvement in speaker similarity (SIM) over prior work, and subjective listening tests confirm significant gains in both semantic intelligibility and timbre similarity over state-of-the-art baselines. The framework is well suited to real-time applications such as live communication and interactive entertainment.
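The chunkwise online pipeline described above can be sketched as a simple streaming loop: audio arrives in fixed-size chunks, each chunk is converted with a small look-ahead (right context), and the algorithmic latency is the chunk length plus the look-ahead. The chunk and context sizes and the `convert_chunk` stub below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed sample rate
CHUNK = 1_280          # 80 ms chunk (illustrative)
LOOKAHEAD = 320        # 20 ms right context (illustrative)

def convert_chunk(chunk, right_ctx, style_embedding):
    """Placeholder for content extraction + style-conditioned synthesis.

    A real system would run the streaming content extractor on `chunk`
    (peeking at `right_ctx`), then decode with the causal vocoder
    conditioned on `style_embedding`. Here we pass audio through
    unchanged so the buffering logic itself is runnable.
    """
    return chunk

def stream_convert(audio, style_embedding):
    """Chunkwise online conversion: emit output as each chunk completes."""
    out = []
    for start in range(0, len(audio) - LOOKAHEAD, CHUNK):
        chunk = audio[start:start + CHUNK]
        right_ctx = audio[start + CHUNK:start + CHUNK + LOOKAHEAD]
        out.append(convert_chunk(chunk, right_ctx, style_embedding))
    return np.concatenate(out) if out else np.empty(0)

# Algorithmic latency = chunk + look-ahead; 100 ms with these
# illustrative sizes, i.e. in the sub-100 ms regime only by construction.
latency_ms = 1000 * (CHUNK + LOOKAHEAD) / SAMPLE_RATE
```

The design trade-off this exposes: a larger look-ahead gives the content extractor more right context (better semantic fidelity) at the cost of higher per-chunk latency.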

📝 Abstract
Zero-shot online voice conversion (VC) holds significant promise for real-time communications and entertainment. However, current VC models struggle to preserve semantic fidelity under real-time constraints, deliver natural-sounding conversions, and adapt effectively to unseen speaker characteristics. To address these challenges, we introduce Conan, a chunkwise online zero-shot voice conversion model that preserves the content of the source while matching the voice timbre and styles of reference speech. Conan comprises three core components: 1) a Stream Content Extractor that leverages Emformer for low-latency streaming content encoding; 2) an Adaptive Style Encoder that extracts fine-grained stylistic features from reference speech for enhanced style adaptation; 3) a Causal Shuffle Vocoder that implements a fully causal HiFiGAN using a pixel-shuffle mechanism. Experimental evaluations demonstrate that Conan outperforms baseline models in subjective and objective metrics. Audio samples can be found at https://aaronz345.github.io/ConanDemo.
Problem

Research questions and friction points this paper is trying to address.

Preserve semantic fidelity in real-time voice conversion
Deliver natural-sounding conversions under real-time constraints
Adapt effectively to unseen speaker characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Emformer-based low-latency streaming content encoding
Fine-grained adaptive style encoder for reference speech
Fully causal HiFiGAN vocoder with pixel-shuffle upsampling
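The pixel-shuffle mechanism named above can be illustrated in isolation: a convolution produces r times as many channels, and the shuffle rearranges those channel groups into the time axis to upsample by r, avoiding transposed convolutions. A minimal NumPy sketch of the 1D rearrangement (shapes and the upsample factor are illustrative; the paper's vocoder wraps this in causal convolutions):

```python
import numpy as np

def pixel_shuffle_1d(x, r):
    """Rearrange (C*r, T) -> (C, T*r) by folding channel groups
    into the time axis: out[c, t*r + i] = x[c*r + i, t]."""
    cr, t = x.shape
    assert cr % r == 0, "channel count must be divisible by r"
    c = cr // r
    return x.reshape(c, r, t).transpose(0, 2, 1).reshape(c, t * r)

# A (hypothetical) 2-channel feature map over 3 frames, upsampled 2x:
x = np.arange(12).reshape(4, 3)   # shape (C*r, T) = (4, 3), r = 2
y = pixel_shuffle_1d(x, r=2)      # shape (C, T*r) = (2, 6)
```

Because the rearrangement only permutes values produced by ordinary (causal) convolutions, no future frames are required at synthesis time, which is what makes the vocoder fully streamable.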