🤖 AI Summary
This work addresses the degradation in perceptual quality of discrete speech synthesis caused by token-level artifacts and distributional drift in neural codec language models. To mitigate these issues, the authors propose MSpoof-TTS, a training-free inference framework that introduces, for the first time, a multi-resolution token-level spoofing detection mechanism. Integrated into a hierarchical decoding process, this mechanism dynamically prunes low-quality token candidates and re-ranks hypotheses under discriminator guidance, enabling high-fidelity zero-shot speech generation. Because MSpoof-TTS operates without modifying model parameters, it improves the robustness and naturalness of synthesized speech while suppressing locally inconsistent or unrealistic audio segments.
📝 Abstract
Neural codec language models enable high-quality discrete speech synthesis, yet their inference remains vulnerable to token-level artifacts and distributional drift that degrade perceptual realism. Rather than relying on preference optimization or retraining, we propose MSpoof-TTS, a training-free inference framework that improves zero-shot synthesis through multi-resolution spoof guidance. We introduce a Multi-Resolution Token-based Spoof Detection framework that evaluates codec sequences at different temporal granularities to detect locally inconsistent or unnatural patterns. We then integrate the spoof detectors into a hierarchical decoding strategy, progressively pruning low-quality candidates and re-ranking hypotheses. This discriminator-guided generation enhances robustness without modifying model parameters. Experiments validate the effectiveness of our framework for robust and high-quality codec-based speech generation.
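The discriminator-guided decoding described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the spoof detectors here are stand-ins that penalize repeated tokens within windows of several sizes (a learned detector would replace them), and the function names, the score-combination rule, and the pruning budget are all hypothetical.

```python
import math

def spoof_score(tokens, window):
    """Toy single-resolution 'spoof detector': returns a realness score in
    [0, 1] that penalizes immediate token repeats inside each window.
    A stand-in for a learned token-level spoof classifier (assumption)."""
    scores = []
    for start in range(0, len(tokens), window):
        chunk = tokens[start:start + window]
        repeats = sum(1 for a, b in zip(chunk, chunk[1:]) if a == b)
        scores.append(1.0 - repeats / max(len(chunk) - 1, 1))
    return sum(scores) / len(scores) if scores else 1.0

def rerank_candidates(candidates, lm_logprobs, windows=(4, 8),
                      alpha=0.5, keep=2):
    """Combine the LM log-probability of each candidate codec sequence
    with an averaged multi-resolution realness score, then prune to the
    `keep` best hypotheses (best-first). `alpha` trades off LM likelihood
    against discriminator guidance; both are illustrative choices."""
    ranked = []
    for tokens, lp in zip(candidates, lm_logprobs):
        realness = sum(spoof_score(tokens, w) for w in windows) / len(windows)
        ranked.append((lp + alpha * math.log(realness + 1e-9), tokens))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [tokens for _, tokens in ranked[:keep]]

# Usage: a degenerate repeated-token hypothesis is pruned even though the
# LM scored it highest, while the non-repetitive hypotheses survive.
candidates = [
    [1, 1, 1, 1, 2, 2, 2, 2],   # locally inconsistent (heavy repetition)
    [1, 2, 3, 4, 5, 6, 7, 8],
    [1, 2, 1, 2, 3, 4, 3, 4],
]
lm_logprobs = [-1.0, -1.2, -1.1]
best = rerank_candidates(candidates, lm_logprobs)
```

In a hierarchical decoder this pruning would run at each stage, so low-quality partial hypotheses are discarded before they are expanded further.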