🤖 AI Summary
Existing zero-shot text-to-speech (TTS) models struggle to capture the complex coupling between acoustic and semantic features, resulting in limited expressiveness and low speaker similarity. To address this, we propose a novel autoregressive–non-autoregressive collaborative zero-shot TTS framework. Our method introduces a Parallel Tokenizer to jointly generate discrete semantic and acoustic tokens, designs a coupled non-autoregressive decoder to explicitly model their interdependence, and incorporates a cross-modal feature alignment mechanism for hierarchical fusion. Built upon a large language model architecture, the framework balances modeling capacity with inference efficiency. Extensive experiments on multiple Chinese and English datasets demonstrate significant improvements over state-of-the-art methods: higher naturalness and speaker similarity, along with faster synthesis speed. This work establishes a new paradigm for high-quality zero-shot TTS.
📝 Abstract
Advances in speech representation and large language models have enhanced zero-shot text-to-speech (TTS) performance. However, existing zero-shot TTS models face challenges in capturing the complex correlations between acoustic and semantic features, resulting in a lack of expressiveness and speaker similarity. The primary reason lies in the complex relationship between semantic and acoustic features, which exhibits both independent and interdependent aspects. This paper introduces a TTS framework that combines autoregressive (AR) and non-autoregressive (NAR) modules to harmonize the independence and interdependence of acoustic and semantic information. The AR model leverages the proposed Parallel Tokenizer to synthesize the top semantic and acoustic tokens simultaneously. In contrast, accounting for their interdependence, the Coupled NAR model predicts detailed tokens based on the AR model's coarse output. Parallel GPT, built on this architecture, is designed to improve zero-shot text-to-speech synthesis through its parallel structure. Experiments on English and Chinese datasets demonstrate that the proposed model significantly outperforms existing zero-shot TTS models in both synthesis quality and efficiency. Speech demos are available at https://t1235-ch.github.io/pgpt/.
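The two-stage data flow described above can be sketched in miniature: an AR stage emits paired (semantic, coarse-acoustic) tokens step by step, and a coupled NAR stage then predicts all fine acoustic levels per frame in parallel, conditioned on each pair. This is an illustrative sketch only; the function names, token layouts, and codebook sizes are hypothetical stand-ins, not the paper's actual implementation.

```python
# Toy sketch of the AR + coupled-NAR pipeline from the abstract.
# CODEBOOK_SIZE and NUM_FINE_LEVELS are assumed values for illustration.
from typing import List, Tuple

CODEBOOK_SIZE = 1024   # hypothetical size of each discrete codebook
NUM_FINE_LEVELS = 3    # hypothetical number of residual acoustic codebooks


def ar_parallel_decode(text_ids: List[int]) -> List[Tuple[int, int]]:
    """AR stage: emit one (semantic, coarse-acoustic) token pair per step,
    mirroring how the Parallel Tokenizer yields both streams jointly."""
    pairs = []
    for t, tok in enumerate(text_ids):
        semantic = (tok * 7 + t) % CODEBOOK_SIZE    # deterministic stand-in
        acoustic = (tok * 13 + t) % CODEBOOK_SIZE   # for a sampled token
        pairs.append((semantic, acoustic))
    return pairs


def coupled_nar_decode(pairs: List[Tuple[int, int]]) -> List[List[int]]:
    """NAR stage: predict all fine acoustic levels for every frame at once,
    each level conditioned on the coupled semantic + coarse-acoustic pair."""
    fine = []
    for semantic, acoustic in pairs:
        levels = [(semantic + acoustic + lvl) % CODEBOOK_SIZE
                  for lvl in range(NUM_FINE_LEVELS)]
        fine.append(levels)
    return fine


text_ids = [5, 42, 99]
pairs = ar_parallel_decode(text_ids)    # sequential, one pair per text step
fine = coupled_nar_decode(pairs)        # parallel across frames and levels
print(len(pairs), len(fine[0]))         # 3 frames, NUM_FINE_LEVELS per frame
```

The key structural point the sketch captures is that only the coarse pair generation is sequential; the detailed acoustic levels are produced in one parallel pass, which is where the efficiency claim in the abstract comes from.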