AI Summary
This work addresses the sharp degradation in generation quality that discrete diffusion language models exhibit when sampling with few steps, which prevents them from being both fast and high-quality. The authors propose a continuous-flow denoising language model that continuously denoises one-hot token embeddings in Euclidean space. To stabilize training, they introduce a time reparameterization, and they apply knowledge distillation to enable high-quality single-step generation. This approach challenges the prevailing assumption that discrete modalities require discrete diffusion processes, marking the first successful realization of efficient single-step language generation based on continuous flows. On the LM1B and OpenWebText (OWT) benchmarks, the model's single-step outputs surpass the 8-step results of existing few-step methods, achieving state-of-the-art generation quality.
Abstract
Language models based on discrete diffusion have attracted widespread interest for their potential to provide faster generation than autoregressive models. In practice, however, they exhibit a sharp degradation of sample quality in the few-step regime, failing to realize this promise. Here we show that language models leveraging flow-based continuous denoising can outperform discrete diffusion in both quality and speed. By revisiting the fundamentals of flows over discrete modalities, we build a flow-based language model (FLM) that performs Euclidean denoising over one-hot token encodings. We show that the model can be trained by predicting the clean data via a cross entropy objective, where we introduce a simple time reparameterization that greatly improves training stability and generation quality. By distilling FLM into its associated flow map, we obtain a distilled flow map language model (FMLM) capable of few-step generation. On the LM1B and OWT language datasets, FLM attains generation quality matching state-of-the-art discrete diffusion models. With FMLM, our approach outperforms recent few-step language models across the board, with one-step generation exceeding their 8-step quality. Our work calls into question the widely held hypothesis that discrete diffusion processes are necessary for generative modeling over discrete modalities, and paves the way toward accelerated flow-based language modeling at scale. Code is available at https://github.com/david3684/flm.
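The abstract describes training by Euclidean denoising over one-hot token encodings with a cross-entropy objective on the predicted clean tokens. The sketch below illustrates the general shape of such a flow-matching training loss under stated assumptions: the straight-line noise-to-data interpolation, the `denoiser` callable, and the uniform time sampling are illustrative choices, not details taken from the paper (which additionally reparameterizes time).

```python
import numpy as np

def cross_entropy(logits, targets):
    # Numerically stable softmax cross-entropy, averaged over all positions.
    logits = logits - logits.max(axis=-1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -np.take_along_axis(logp, targets[..., None], axis=-1).mean()

def flm_training_loss(tokens, denoiser, vocab_size, rng):
    """Hedged sketch of one continuous-flow training step: interpolate
    between Gaussian noise and one-hot token encodings in Euclidean space,
    then score the denoiser's clean-token prediction with cross-entropy.
    The straight path and uniform t are assumptions for illustration."""
    x1 = np.eye(vocab_size)[tokens]                  # clean one-hot targets, (B, L, V)
    x0 = rng.standard_normal(x1.shape)               # Gaussian noise endpoint
    t = rng.uniform(size=(tokens.shape[0], 1, 1))    # flow time in (0, 1)
    xt = (1.0 - t) * x0 + t * x1                     # noisy point on the path
    return cross_entropy(denoiser(xt, t), tokens)    # predict clean tokens

# Toy "denoiser": identity on the vocab axis (stands in for a transformer).
toy_denoiser = lambda xt, t: xt
rng = np.random.default_rng(0)
loss = flm_training_loss(np.array([[1, 2, 3]]), toy_denoiser, vocab_size=5, rng=rng)
```

A real model would replace `toy_denoiser` with a transformer conditioned on `t`; the distilled flow map (FMLM) would then be trained to jump from `x0` toward `x1` in one or a few steps rather than integrating the flow.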