🤖 AI Summary
Autoregressive models (ARMs) struggle to capture non-sequential dependencies, while masked diffusion models (MDMs) can produce inconsistencies when unmasking multiple tokens simultaneously and cannot handle arbitrary-position infilling constraints when the number of tokens to fill is unknown in advance. To address these limitations, we propose Insertion Language Models (ILMs), a generative framework that jointly models *insertion position* and *token selection*, inserting one token at a time at arbitrary positions in a sequence. This supports dependency-driven, non-sequential, and variable-length generation. ILMs are trained end-to-end with a tailored network parameterization and a simple denoising objective, learning both where to insert and what to insert without requiring a predefined fill length. Experiments demonstrate that ILMs outperform both ARMs and MDMs on planning tasks, match ARMs on unconditional text generation while surpassing MDMs, and offer greater flexibility than MDMs for arbitrary-length text infilling.
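For intuition, here is a minimal sketch of what a single ILM generation step could look like, assuming a model that scores every (insertion gap, token) pair for the current partial sequence. The function names, the flat joint softmax over gaps and tokens, and the use of a special stop token are illustrative assumptions, not the paper's exact parameterization.

```python
import torch

def ilm_generate_step(model, tokens, stop_id):
    """One hypothetical ILM step: jointly sample an insertion position and a
    token, then insert the token at that position.

    Assumes `model(tokens)` returns logits of shape (len(tokens) + 1, vocab),
    i.e. one distribution per insertion gap (before, between, and after the
    current tokens). Shapes and the stop convention are illustrative only.
    """
    logits = model(tokens)                           # (num_gaps, vocab_size)
    joint = torch.softmax(logits.flatten(), dim=-1)  # joint dist over (gap, token)
    choice = torch.multinomial(joint, 1).item()
    gap, tok = divmod(choice, logits.size(-1))
    if tok == stop_id:                               # assumed stop signal
        return tokens, True
    return tokens[:gap] + [tok] + tokens[gap:], False

def ilm_generate(model, stop_id, max_steps=128):
    """Insert tokens one at a time until the model signals it is done."""
    tokens, done = [], False
    for _ in range(max_steps):
        tokens, done = ilm_generate_step(model, tokens, stop_id)
        if done:
            break
    return tokens
```

Because each step commits to a single (position, token) pair, every inserted token can condition on all tokens already placed, regardless of their order in the final sequence.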
📝 Abstract
Autoregressive models (ARMs), which predict subsequent tokens one-by-one "from left to right," have achieved significant success across a wide range of sequence generation tasks. However, they struggle to accurately represent sequences that require satisfying sophisticated constraints or whose sequential dependencies are better addressed by out-of-order generation. Masked Diffusion Models (MDMs) address some of these limitations, but the process of unmasking multiple tokens simultaneously in MDMs can introduce incoherence, and MDMs cannot handle arbitrary infilling constraints when the number of tokens to be filled in is not known in advance. In this work, we introduce Insertion Language Models (ILMs), which learn to insert tokens at arbitrary positions in a sequence -- that is, they select jointly both the position and the vocabulary element to be inserted. By inserting tokens one at a time, ILMs can represent strong dependencies between tokens, and their ability to generate sequences in arbitrary order allows them to accurately model sequences where token dependencies do not follow a left-to-right sequential structure. To train ILMs, we propose a tailored network parameterization and use a simple denoising objective. Our empirical evaluation demonstrates that ILMs outperform both ARMs and MDMs on common planning tasks. Furthermore, we show that ILMs outperform MDMs and perform on par with ARMs in an unconditional text generation task while offering greater flexibility than MDMs in arbitrary-length text infilling.
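To illustrate the kind of denoising objective the abstract alludes to, the sketch below builds training examples by randomly deleting tokens from a complete sequence and recording where each deleted token would have to be re-inserted. The 50% keep probability, the function name, and the target format are assumptions for illustration rather than the paper's actual recipe.

```python
import random

def make_denoising_example(sequence, keep_prob=0.5):
    """Corrupt a complete sequence by deleting tokens, and return the
    surviving subsequence together with the (gap, token) insertions that
    would restore the original. Hyperparameters and format are illustrative.
    """
    keep = [random.random() < keep_prob for _ in sequence]
    corrupted = [tok for tok, k in zip(sequence, keep) if k]

    targets, gap = [], 0
    for tok, k in zip(sequence, keep):
        if k:
            gap += 1                    # surviving token: advance past this gap
        else:
            targets.append((gap, tok))  # deleted token: insert before corrupted[gap]
    return corrupted, targets
```

A model trained to place probability mass on any of these (gap, token) pairs can then be rolled out one insertion at a time at inference, which is what would allow arbitrary-order, variable-length infilling without fixing the number of missing tokens up front.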