A Unified Neural Codec Language Model for Selective Editable Text to Speech Generation

πŸ“… 2026-01-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limited controllability of existing neural codec language models in zero-shot speech synthesis: they can only holistically imitate a reference utterance, without selective manipulation of specific acoustic attributes such as voice timbre or prosody. To overcome this limitation, the authors propose SpeechEdit, a unified neural codec language model that enables flexible, instruction-driven editing of targeted acoustic attributes while, by default, preserving the remaining characteristics of the prompt speech. They also introduce LibriEdit, a dataset of difference-aware (delta) training pairs, and use it to train for precise, localized attribute control. Experiments show that SpeechEdit significantly improves controllability over the targeted acoustic attributes while maintaining the naturalness and robustness of the synthesized speech.

πŸ“ Abstract
Neural codec language models achieve impressive zero-shot Text-to-Speech (TTS) by fully imitating the acoustic characteristics of a short speech prompt, including timbre, prosody, and paralinguistic information. However, such holistic imitation limits their ability to isolate and control individual attributes. In this paper, we present SpeechEdit, a unified codec language model that extends zero-shot TTS with a selective control mechanism. By default, SpeechEdit reproduces the complete acoustic profile inferred from the speech prompt, but it selectively overrides only the attributes specified by explicit control instructions. To enable controllable modeling, SpeechEdit is trained on our newly constructed LibriEdit dataset, which provides delta (difference-aware) training pairs derived from LibriHeavy. Experimental results show that our approach maintains naturalness and robustness while offering flexible and localized control over desired attributes. Audio samples are available at https://speech-editing.github.io/speech-editing/.
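The delta (difference-aware) training pairs described above can be pictured as examples that couple a prompt utterance with a target that differs in exactly one instructed attribute. The sketch below illustrates one plausible schema for such a pair; the paper does not publish its data format, so every name here (`DeltaTrainingPair`, `build_pair`, the dictionary keys) is a hypothetical illustration, not the authors' actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeltaTrainingPair:
    """One difference-aware training example (hypothetical schema)."""
    prompt_tokens: list   # codec tokens of the reference (prompt) utterance
    text: str             # transcript the model should synthesize
    instruction: Optional[str]  # e.g. "change prosody: happy"; None = plain zero-shot TTS
    target_tokens: list   # codec tokens of the desired edited output

def build_pair(base: dict, edited: dict, attribute: str) -> DeltaTrainingPair:
    """Pair a base utterance with an edited version differing only in `attribute`.

    `base` and `edited` are assumed to be dicts with "codec_tokens",
    "transcript", and per-attribute labels such as "prosody".
    """
    return DeltaTrainingPair(
        prompt_tokens=base["codec_tokens"],
        text=edited["transcript"],
        instruction=f"change {attribute}: {edited[attribute]}",
        target_tokens=edited["codec_tokens"],
    )
```

Training on such pairs teaches the model that, absent an instruction, it should copy the prompt's full acoustic profile, and that an instruction overrides only the named attribute.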
Problem

Research questions and friction points this paper is trying to address.

zero-shot TTS, selective control, acoustic attributes, neural codec language model, speech editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

selective control, neural codec language model, zero-shot TTS, editable speech synthesis, delta training pairs
πŸ”Ž Similar Papers
No similar papers found.