FabasedVC: Enhancing Voice Conversion with Text Modality Fusion and Phoneme-Level SSL Features

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To jointly preserve semantic fidelity and accurately model the target speaker's timbre, prosody, and duration in voice conversion, this paper proposes FabasedVC, an end-to-end framework built upon VITS. The method integrates BERT-based text semantic encoding, phoneme-level self-supervised learning (SSL) features, and a learnable duration predictor. Frame-level SSL features are aggregated into phoneme-level features using two methods, duration-based average pooling and an attention mechanism, while explicit phoneme-duration constraints enable fine-grained prosody modeling. The duration predictor explicitly controls speech rhythm, improving duration consistency with the target speaker. Experiments show improvements over competing systems across three key metrics: naturalness (MOS), speaker similarity (SIM), and content preservation (WER). The approach thus unifies high-fidelity semantic reconstruction with high-similarity acoustic attribute transfer.

📝 Abstract
In voice conversion (VC), it is crucial to preserve complete semantic information while accurately modeling the target speaker's timbre and prosody. This paper proposes FabasedVC to achieve VC with enhanced similarity in timbre, prosody, and duration to the target speaker, as well as improved content integrity. It is an end-to-end VITS-based VC system that integrates relevant textual modality information, phoneme-level self-supervised learning (SSL) features, and a duration predictor. Specifically, we employ a text feature encoder to encode attributes such as text, phonemes, tones and BERT features. We then process the frame-level SSL features into phoneme-level features using two methods: average pooling and attention mechanism based on each phoneme's duration. Moreover, a duration predictor is incorporated to better align the speech rate and prosody of the target speaker. Experimental results demonstrate that our method outperforms competing systems in terms of naturalness, similarity, and content integrity.
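The two frame-to-phoneme aggregation methods named in the abstract (duration-based average pooling and an attention mechanism) can be sketched as below. Array shapes and the per-phoneme query vectors are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def phoneme_level_pool(frames, durations):
    """Average frame-level SSL features over each phoneme's frame span.

    frames:    (T, D) frame-level SSL features
    durations: frame counts per phoneme, summing to T
    returns:   (P, D) phoneme-level features
    """
    pooled, start = [], 0
    for d in durations:
        pooled.append(frames[start:start + d].mean(axis=0))
        start += d
    return np.stack(pooled)

def phoneme_level_attend(frames, durations, queries):
    """Attention-weighted aggregation within each phoneme's span.

    queries: (P, D) one query vector per phoneme (hypothetical; the
    paper does not specify where queries come from in this sketch).
    """
    out, start = [], 0
    for i, d in enumerate(durations):
        seg = frames[start:start + d]        # (d, D) frames of phoneme i
        scores = seg @ queries[i]            # (d,) attention logits
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()             # softmax over the span
        out.append(weights @ seg)            # (D,) weighted average
        start += d
    return np.stack(out)
```

With an all-zero query, the attention weights are uniform and the two methods coincide, which is a quick sanity check on the implementation.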
Problem

Research questions and friction points this paper is trying to address.

Preserve complete semantic information while modeling the target speaker's timbre
Enhance similarity to the target speaker in timbre, prosody, and duration
Improve content integrity through text and phoneme feature integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates text-modality information (text, phonemes, tones, BERT features) for semantic enhancement
Converts frame-level SSL features to phoneme level via duration-based average pooling and attention
Employs a duration predictor to align speech rate and prosody with the target speaker
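The duration predictor's role can be illustrated with a minimal forward pass in the style of common TTS duration predictors: a small network maps each phoneme's encoder state to a log-duration, which is exponentiated and rounded to a frame count. The ReLU MLP below is an assumed stand-in; the paper's actual predictor architecture may differ:

```python
import numpy as np

def duration_predictor(h, w1, w2):
    """Predict per-phoneme frame counts from encoder states (sketch).

    h:  (P, D) phoneme encoder outputs
    w1: (D, H) hidden projection weights (hypothetical)
    w2: (H,)  output projection weights (hypothetical)
    Returns integer frame counts via exp of predicted log-durations,
    clamped to at least one frame per phoneme.
    """
    hidden = np.maximum(h @ w1, 0.0)   # ReLU hidden layer
    log_dur = hidden @ w2              # (P,) predicted log-durations
    return np.maximum(np.round(np.exp(log_dur)).astype(int), 1)
```

Predicting in the log domain keeps durations positive and compresses the dynamic range of long phonemes, which is why this parameterization is standard in duration modeling.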
Wenyu Wang
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China; SYKI-SPEECH Team, Xi’an, China
Zhetao Hu
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China; SYKI-SPEECH Team, Xi’an, China
Yiquan Zhou
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China; SYKI-SPEECH Team, Xi’an, China; AI Platform Department, bilibili, Shanghai, China
Jiacheng Xu
Nanyang Technological University
Zhiyu Wu
DeepSeek-AI, Peking University
Chen Li
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Shihao Li
Division of Music and Audio, Union Wheatland Culture and Media Ltd., Chengdu, China