🤖 AI Summary
This work challenges the conventional left-to-right generation order in autoregressive speech synthesis, whose optimality remains unverified. Leveraging a masked diffusion framework, the proposed approach enables arbitrary decoding orders during both training and inference, allowing for a systematic evaluation of fixed versus adaptive strategies on synthesis quality. By incorporating discrete acoustic representations, a position-wise progressive demasking mechanism, and a Top-K adaptive decoding strategy, the study demonstrates that decoding order significantly influences speech fidelity, with adaptive strategies consistently outperforming fixed ones. Notably, high-fidelity speech can still be generated under an extremely low 1-bit quantization condition, underscoring the method's efficiency and robustness.
📝 Abstract
Autoregressive speech synthesis often adopts a left-to-right order, yet generation order is a modelling choice. We investigate decoding order through a masked diffusion framework, which progressively unmasks positions and allows arbitrary decoding orders during training and inference. By interpolating between identity and random permutations, we show that randomness in decoding order affects speech quality. We further compare fixed strategies, such as \texttt{l2r} and \texttt{r2l}, with adaptive ones, such as Top-$K$, finding that fixed-order decoding, including the dominant left-to-right approach, is suboptimal, while adaptive decoding yields better performance. Finally, since masked diffusion requires discrete inputs, we quantise acoustic representations and find that even 1-bit quantisation can support reasonably high-quality speech.
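To make the contrast between fixed and adaptive orders concrete, the following is a minimal sketch of one Top-$K$ demasking step, where the model reveals the $K$ masked positions it is most confident about. The helper name, the use of per-position max-probability as the confidence score, and the NumPy-array interface are all illustrative assumptions; the paper's exact Top-$K$ rule and model interface may differ.

```python
import numpy as np

def topk_adaptive_demask(logits, mask, k):
    """One adaptive demasking step (hypothetical helper).

    logits: (T, V) per-position token logits from the model.
    mask:   (T,) boolean array, True where a position is still masked.
    k:      number of masked positions to reveal this step.
    Returns predicted tokens, the updated mask, and the revealed indices.
    """
    # Softmax over the vocabulary; max probability serves as confidence.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)

    # Only still-masked positions compete for this step's reveals.
    conf = np.where(mask, conf, -np.inf)
    reveal = np.argsort(conf)[::-1][:k]  # K most confident masked positions

    tokens = probs.argmax(axis=-1)       # greedy prediction per position
    new_mask = mask.copy()
    new_mask[reveal] = False             # these positions are now decoded
    return tokens, new_mask, reveal
```

A fixed \texttt{l2r} schedule would instead reveal positions $0, 1, 2, \dots$ regardless of confidence; the abstract's finding is that letting confidence pick the order, as above, yields better synthesis quality.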