XLM: A Python package for non-autoregressive language models

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-autoregressive (NAR) language models have long suffered from a lack of unified training and inference frameworks, resulting in highly customized data preprocessing, loss design, and decoding strategies—hindering fair comparison and reproducibility. To address this, we introduce the first modular, open-source toolkit tailored for lightweight NAR models. It provides standardized preprocessing pipelines, dedicated loss functions (e.g., Conditional Masked Language Modeling), parallel decoding mechanisms, and a Hugging Face–style API. This framework fills a critical gap in the NAR ecosystem by establishing a standardized, accessible foundation that substantially lowers the barrier to entry. Complementing the toolkit, we release a family of pre-trained models across multiple scales, enabling reproducible benchmarking. The codebase is publicly available and has been widely adopted by the research community, accelerating efficient development and equitable evaluation of NAR models.
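To make the Conditional Masked Language Modeling objective mentioned above concrete, the core corruption step can be sketched in a few lines. This is a generic CMLM sketch, not code from the XLM package; the function name and signature are hypothetical.

```python
import random

def cmlm_mask(tokens, mask_id, rng):
    """CMLM training corruption (sketch): sample the number of masked
    positions uniformly from 1..len(tokens), replace those tokens with
    the mask token, and return the corrupted sequence together with the
    positions on which the loss would be computed."""
    n_masked = rng.randint(1, len(tokens))  # uniform number of masks
    positions = sorted(rng.sample(range(len(tokens)), n_masked))
    corrupted = list(tokens)
    for p in positions:
        corrupted[p] = mask_id
    return corrupted, positions

# The model is then trained to predict the original tokens at
# `positions` given `corrupted`, with all predictions made in parallel.
```

The loss is computed only on the masked positions, which is what lets the model fill in many tokens per step at inference time.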

📝 Abstract
In recent years, there has been a resurgence of interest in non-autoregressive text generation in the context of general language modeling. Unlike the well-established autoregressive language modeling paradigm, which has a plethora of standard training and inference libraries, implementations of non-autoregressive language modeling have largely been bespoke, making it difficult to perform systematic comparisons of different methods. Moreover, each non-autoregressive language model typically requires its own data collation, loss, and prediction logic, making it challenging to reuse common components. In this work, we present the XLM Python package, which is designed to make implementing small non-autoregressive language models faster, with a secondary goal of providing a suite of small pre-trained models (through a companion xlm-models package) that can be used by the research community. The code is available at https://github.com/dhruvdcoder/xlm-core.
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of standardized libraries for non-autoregressive language models
Simplifies implementation and comparison of diverse non-autoregressive methods
Provides reusable components and pre-trained models for research community
Innovation

Methods, ideas, or system contributions that make the work stand out.

Python package for non-autoregressive language models
Standardizes training and inference for systematic comparisons
Provides reusable components and pre-trained small models
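The standardized inference the bullets refer to typically means mask-predict style parallel decoding: start from an all-mask sequence, predict every position at once, keep the most confident predictions, and re-mask the rest for another round. The sketch below illustrates that generic loop; it is not the package's actual API, and `predict` is a stand-in for a trained model.

```python
def mask_predict(predict, length, mask_id, iterations):
    """Iterative parallel decoding (sketch).

    `predict(seq)` must return (tokens, confidences) for every
    position; the number of re-masked positions decays linearly
    to zero over `iterations` rounds."""
    seq = [mask_id] * length
    for t in range(iterations):
        tokens, confs = predict(seq)
        n_remask = int(length * (iterations - 1 - t) / iterations)
        order = sorted(range(length), key=lambda i: confs[i])  # least confident first
        seq = list(tokens)
        for i in order[:n_remask]:
            seq[i] = mask_id  # re-mask low-confidence positions
    return seq
```

With a real model, `confs` would be token probabilities from the network; any callable with the same return shape works for experimentation.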