🤖 AI Summary
Autoregressive (AR) Transformer-based text-to-speech (TTS) systems suffer from word omission, word repetition, and poor length generalization when synthesizing long sequences. To address this, we propose an alignment-free implicit monotonic alignment learning mechanism that integrates learnable relative position embeddings into cross-attention, enforcing input–output monotonicity while preserving modeling flexibility. Our method adopts an encoder–decoder architecture with alternately stacked multi-head self-attention and cross-attention modules, trained end-to-end without external alignment supervision. To our knowledge, this is the first work to implicitly encode monotonic alignment constraints within AR Transformer TTS, significantly reducing word repetition and omission while enabling robust synthesis of arbitrarily long utterances. Objective and subjective evaluations show that our model matches the naturalness and expressiveness of a T5-based baseline while substantially improving robustness and length generalization on long sequences.
📝 Abstract
Autoregressive (AR) Transformer-based sequence models are known to have difficulty generalizing to sequences longer than those seen during training. When applied to text-to-speech (TTS), these models tend to drop or repeat words or produce erratic output, especially for longer utterances. In this paper, we introduce enhancements aimed at AR Transformer-based encoder-decoder TTS systems that address these robustness and length generalization issues. Our approach uses an alignment mechanism to provide cross-attention operations with relative location information. The associated alignment position is learned as a latent property of the model via backprop and requires no external alignment information during training. While the approach is tailored to the monotonic nature of TTS input-output alignment, it is still able to benefit from the flexible modeling power of interleaved multi-head self- and cross-attention operations. A system incorporating these improvements, which we call Very Attentive Tacotron, matches the naturalness and expressiveness of a baseline T5-based TTS system, while eliminating problems with repeated or dropped words and enabling generalization to any practical utterance length.
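To make the core idea concrete, here is a minimal NumPy sketch of location-relative cross-attention with a monotonically advancing alignment position. This is not the paper's exact mechanism: the Gaussian relative-location bias, the softplus step size, and all dimensions are hypothetical stand-ins chosen only to illustrate how biasing attention logits by distance from a learned, strictly increasing alignment position discourages dropped or repeated words.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

enc_len, d = 12, 16                          # toy sizes (hypothetical)
K = rng.normal(size=(enc_len, d))            # encoder keys
V = rng.normal(size=(enc_len, d))            # encoder values

def cross_attend(q, align_pos, sigma=2.0):
    """Content logits plus a relative-location bias centered on align_pos."""
    logits = K @ q / np.sqrt(d)              # standard scaled dot-product scores
    dist = np.arange(enc_len) - align_pos    # signed distance to alignment position
    logits += -(dist ** 2) / (2 * sigma**2)  # Gaussian bias: favor inputs near align_pos
    w = softmax(logits)
    return w @ V, w

align_pos = 0.0
positions = []
for t in range(6):                           # a few decoder steps
    q = rng.normal(size=d)                   # stand-in for the decoder query at step t
    ctx, w = cross_attend(q, align_pos)
    # In a trained model the step size would be predicted by the network;
    # softplus keeps it positive, so the alignment can only move forward.
    delta = np.log1p(np.exp(rng.normal()))
    align_pos += delta
    positions.append(align_pos)
```

Because the step size is constrained to be positive, the attention window can never jump backward (re-reading words) or stall indefinitely, while the content term still lets the model attend flexibly within the window.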