🤖 AI Summary
This work addresses the limitation of autoregressive (AR) models—namely, their inability to perform context-aware masked token prediction—while also mitigating the training and inference inefficiency inherent in masked language models (MLMs) and diffusion models. To this end, we propose MARIA, an architecture that seamlessly integrates pretrained MLM and AR model hidden states without modifying the AR backbone. MARIA employs a lightweight linear decoder for high-fidelity masked token prediction and leverages the AR model’s key-value (KV) cache to preserve inference efficiency. Crucially, MARIA is the first method to endow standard AR models with both efficient and high-accuracy masked infilling capability. Experiments demonstrate that MARIA significantly outperforms baselines—including discrete diffusion models—on masked infilling tasks, achieving superior accuracy while maintaining low latency.
📝 Abstract
Historically, LLMs have been trained using either autoregressive (AR) or masked language modeling (MLM) objectives, with AR models gaining dominance in recent years. However, AR models are inherently incapable of masked infilling, which is the ability to predict masked tokens between past and future context. In contrast, MLMs suffer from intrinsic computational inefficiencies during both training and inference that hinder their scalability. This work introduces MARIA (Masked and Autoregressive Infilling Architecture), a novel approach that leverages the strengths of both paradigms to achieve state-of-the-art masked infilling performance. MARIA combines a pretrained MLM and a pretrained AR model by training a linear decoder that takes their concatenated hidden states as input. This minimal modification enables the AR model to perform infilling while retaining its inherent advantages, in particular faster inference with KV caching. Our results demonstrate that MARIA significantly outperforms existing methods, most notably discrete diffusion models, on masked infilling tasks.
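The combination step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, random placeholder hidden states, and variable names are all assumptions, standing in for the frozen outputs of the pretrained AR and MLM backbones at a single masked position.

```python
import numpy as np

# Hypothetical dimensions (assumptions for illustration, not from the paper).
d_ar, d_mlm, vocab = 64, 48, 100
rng = np.random.default_rng(0)

# Placeholder hidden states at one masked position, standing in for the
# frozen pretrained AR and MLM backbone outputs.
h_ar = rng.standard_normal(d_ar)
h_mlm = rng.standard_normal(d_mlm)

# MARIA's only trained component: a linear decoder over the
# concatenated hidden states.
W = rng.standard_normal((vocab, d_ar + d_mlm)) * 0.02
b = np.zeros(vocab)

h = np.concatenate([h_ar, h_mlm])  # shape: (d_ar + d_mlm,)
logits = W @ h + b                 # shape: (vocab,)
pred = int(np.argmax(logits))      # predicted id for the masked token
```

Because only the linear decoder is trained, both backbones stay frozen, and the AR model's KV cache can be reused at inference time exactly as in ordinary left-to-right decoding.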