Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions

📅 2025-05-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive models (ARMs) struggle to capture non-sequential dependencies, while masked diffusion models (MDMs) suffer from inconsistency when unmasking multiple tokens simultaneously and cannot handle arbitrary-position constraints with unknown fill lengths. To address these limitations, we propose Insertion Language Models (ILMs), the first generative framework that jointly models *insertion position* and *token selection*, enabling single-step, incremental token insertion at arbitrary positions in a sequence. This supports dependency-driven, non-sequential generation with dynamically growing length. ILMs employ a customized denoising objective to learn both insertion policies and content generation end-to-end, without requiring predefined fill lengths. Experiments demonstrate that ILMs significantly outperform ARMs and MDMs on planning tasks, achieve unconditional text generation quality on par with ARMs, and surpass MDMs in flexibility for arbitrary-length text infilling.
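The generation loop described above (jointly picking an insertion position and a token, one step at a time) can be sketched as follows. This is a minimal illustration of the control flow only: `toy_joint_scores` is a hypothetical stand-in for the trained ILM network, which in the paper produces a joint distribution over (position, token) pairs; here we use uniform scores purely to make the loop runnable.

```python
import random

# Hypothetical scorer standing in for the trained ILM network.
# The real model jointly scores (insertion position, token) pairs;
# a uniform score is used here only to illustrate the control flow.
def toy_joint_scores(seq, vocab):
    # One candidate action per (gap, token); gaps range 0..len(seq) inclusive.
    return {(pos, tok): 1.0 for pos in range(len(seq) + 1) for tok in vocab}

def ilm_generate(vocab, max_len, stop_token="<eos>", seed=0):
    """Grow a sequence one insertion at a time, at arbitrary positions."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < max_len:
        scores = toy_joint_scores(seq, vocab + [stop_token])
        actions, weights = zip(*scores.items())
        pos, tok = rng.choices(actions, weights=weights, k=1)[0]
        if tok == stop_token:   # the model itself decides when to stop,
            break               # so no target length is fixed in advance
        seq.insert(pos, tok)    # insertion, not append-only extension
    return seq

print(ilm_generate(["a", "b", "c"], max_len=5))
```

Because each step inserts anywhere in the current sequence, tokens can be emitted in dependency order rather than strictly left to right, which is the property the paper exploits for planning tasks.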

📝 Abstract
Autoregressive models (ARMs), which predict subsequent tokens one-by-one "from left to right," have achieved significant success across a wide range of sequence generation tasks. However, they struggle to accurately represent sequences that require satisfying sophisticated constraints or whose sequential dependencies are better addressed by out-of-order generation. Masked Diffusion Models (MDMs) address some of these limitations, but the process of unmasking multiple tokens simultaneously in MDMs can introduce incoherences, and MDMs cannot handle arbitrary infilling constraints when the number of tokens to be filled in is not known in advance. In this work, we introduce Insertion Language Models (ILMs), which learn to insert tokens at arbitrary positions in a sequence -- that is, they select jointly both the position and the vocabulary element to be inserted. By inserting tokens one at a time, ILMs can represent strong dependencies between tokens, and their ability to generate sequences in arbitrary order allows them to accurately model sequences where token dependencies do not follow a left-to-right sequential structure. To train ILMs, we propose a tailored network parameterization and use a simple denoising objective. Our empirical evaluation demonstrates that ILMs outperform both ARMs and MDMs on common planning tasks. Furthermore, we show that ILMs outperform MDMs and perform on par with ARMs in an unconditional text generation task while offering greater flexibility than MDMs in arbitrary-length text infilling.
Problem

Research questions and friction points this paper is trying to address.

Autoregressive models struggle with out-of-order sequence generation.
Masked Diffusion Models fail to handle arbitrary infilling constraints.
Existing frameworks lack a way to insert tokens flexibly at arbitrary positions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Insert tokens at arbitrary positions in sequences
Select both position and vocabulary element jointly
Train with tailored network and denoising objective
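The arbitrary-length infilling capability listed above can also be sketched. This is a hedged toy example, not the paper's implementation: `ilm_infill` is a hypothetical helper in which a uniform random policy stands in for the trained network, and insertions are confined to the gap between a fixed prefix and suffix, so the fill length is decided on the fly rather than specified in advance.

```python
import random

# Hedged sketch of arbitrary-length infilling with an insertion-style policy.
# A uniform random choice stands in for the trained ILM scorer; the point is
# that the fill length emerges from when the stop token is chosen.
def ilm_infill(prefix, suffix, vocab, max_fill=4, stop_token="<eos>", seed=1):
    rng = random.Random(seed)
    fill = []
    while len(fill) < max_fill:
        # Candidate actions: insert any token at any gap inside the fill span.
        actions = [(pos, tok)
                   for pos in range(len(fill) + 1)
                   for tok in vocab + [stop_token]]
        pos, tok = rng.choice(actions)
        if tok == stop_token:   # halt without a predefined fill length
            break
        fill.insert(pos, tok)
    return prefix + fill + suffix

print(ilm_infill(["the"], ["mat"], ["cat", "sat", "on"]))
```

This is the flexibility contrast with MDMs drawn in the abstract: a masked model must commit to a fixed number of mask slots before filling, whereas an insertion policy grows the span incrementally.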