🤖 AI Summary
Autoregressive language models suffer from sequential decoding bottlenecks and weak global coherence, while text diffusion models have progressed slowly due to the difficulty of modeling high-dimensional discrete token spaces. To address this, we propose Cosmos: the first diffusion-based text generation framework built upon a learned compressed latent space. Cosmos employs a jointly trained autoencoder to achieve 8× sequence compression and introduces pre-trained language model activation alignment and reconstruction constraints to preserve semantic fidelity. Diffusion training is conducted in the frozen encoder's latent space with perturbation-augmented learning. Evaluated on story generation, question generation, summarization, and detoxification, Cosmos matches or surpasses both autoregressive and existing diffusion baselines in generation quality, while accelerating inference by over 2×. It thus achieves a favorable trade-off among generation quality, efficiency, and controllability.
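The joint autoencoder objective described above (token-level reconstruction plus alignment with frozen pre-trained LM activations, over an 8× compressed latent sequence) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the window mean-pooling, linear projections, loss weight `lam`, and all dimensions are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, LATENT_LEN, DIM = 64, 8, 16   # 8x compression: 64 tokens -> 8 latents

# Toy linear encoder/decoder projections (stand-ins for the real networks).
W_enc = rng.normal(scale=0.1, size=(DIM, DIM))
W_dec = rng.normal(scale=0.1, size=(DIM, DIM))

def encode(x):
    # Mean-pool each window of 8 token vectors, then project: (64,16) -> (8,16).
    pooled = x.reshape(LATENT_LEN, SEQ_LEN // LATENT_LEN, DIM).mean(axis=1)
    return pooled @ W_enc

def decode(z):
    # Broadcast each latent back over its 8-token window: (8,16) -> (64,16).
    return np.repeat(z @ W_dec, SEQ_LEN // LATENT_LEN, axis=0)

def joint_loss(x, frozen_lm_acts, lam=1.0):
    z = encode(x)
    # Token-level reconstruction constraint.
    recon = np.mean((decode(z) - x) ** 2)
    # Alignment with (pooled) activations of a frozen pre-trained LM encoder.
    target = frozen_lm_acts.reshape(LATENT_LEN, -1, DIM).mean(axis=1)
    align = np.mean((z - target) ** 2)
    return recon + lam * align

x = rng.normal(size=(SEQ_LEN, DIM))        # stand-in token embeddings
lm_acts = rng.normal(size=(SEQ_LEN, DIM))  # stand-in frozen LM activations
print(encode(x).shape, joint_loss(x, lm_acts))
```

The key design point is that the alignment term anchors the compressed latents to semantically meaningful LM features, which is what makes the space smooth enough for diffusion.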
📄 Abstract
Autoregressive language models dominate modern text generation, yet their sequential nature introduces fundamental limitations: decoding is slow, and maintaining global coherence remains challenging. Diffusion models offer a promising alternative by enabling parallel generation and flexible control; however, their application to text generation is hindered by the high dimensionality of token-level representations. We introduce Cosmos, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This space is learned using an autoencoder trained simultaneously for token-level reconstruction and alignment with frozen activations from a pretrained language encoder, providing robust semantic grounding and enabling effective perturbation-based augmentations. Empirically, we demonstrate that text representations can be compressed by $8\times$ while maintaining generation quality comparable to token-level diffusion models. Furthermore, increasing the latent sequence length allows Cosmos to surpass both diffusion-based and autoregressive baselines. We evaluate Cosmos on four diverse generative tasks, including story generation, question generation, summarization, and detoxification, and compare it with various generative paradigms. Cosmos achieves comparable or superior generation quality while offering more than $2\times$ faster inference.
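The diffusion side of the pipeline trains a denoiser in the frozen encoder's latent space, with perturbation-based augmentation of the latents. A minimal sketch of producing one training pair under a standard DDPM-style forward process is shown below; the linear variance schedule, perturbation scale `sigma`, and epsilon-prediction target are assumptions, not details confirmed by the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_LEN, DIM, T = 8, 16, 1000

# Standard linear variance schedule (an assumption; the paper's may differ).
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def perturb(z0, sigma=0.1):
    # Perturbation-based augmentation: jitter the frozen encoder's latents
    # so the denoiser is trained on a smooth neighborhood of each point.
    return z0 + sigma * rng.normal(size=z0.shape)

def diffusion_training_pair(z0, t):
    # Forward process q(z_t | z_0) applied in the compressed latent space.
    eps = rng.normal(size=z0.shape)
    z_t = np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return z_t, eps   # a denoiser would be trained to predict eps from (z_t, t)

z0 = perturb(rng.normal(size=(LATENT_LEN, DIM)))  # stand-in encoder latents
z_t, eps = diffusion_training_pair(z0, t=500)
print(z_t.shape)
```

Because the latent sequence is only 1/8 the token length, each denoising step operates on far fewer positions than token-level diffusion, which is where the inference speedup comes from.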