Mercury: Ultra-Fast Language Models Based on Diffusion

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow autoregressive decoding of large language models (LLMs), which hinders real-time programming applications, this paper introduces Mercury, the first family of commercial-scale diffusion-based LLMs. Its core innovation lies in bringing diffusion modeling to large-scale language modeling: a Transformer is trained to predict multiple tokens in parallel, sidestepping the fundamental sequential decoding bottleneck of autoregressive generation. The paper further presents Mercury Coder, a code-generation-optimized variant in Mini and Small configurations; on NVIDIA H100 GPUs, these achieve throughputs of 1,109 and 737 tokens/sec, respectively, up to 10× faster than speed-optimized frontier models while maintaining comparable quality. On the Copilot Arena benchmark, Mercury Coder is the fastest model overall and ranks second in quality. A public API and an interactive web playground have been released, empirically validating the diffusion paradigm as a practical route to efficient LLM inference.
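The paper does not disclose Mercury's decoding algorithm in detail, but the parallel generation it describes can be illustrated with the generic masked-diffusion decoding loop: start from a fully masked sequence and, over a few refinement passes, commit the denoiser's most confident predictions in parallel. The sketch below is a toy illustration of that idea only; `toy_denoiser`, the confidence scores, and the per-step unmasking budget are all invented stand-ins, not Mercury's actual model or schedule.

```python
import math
import random

MASK = -1  # sentinel id for a still-masked position (illustrative)

def toy_denoiser(tokens):
    """Stand-in for the Transformer denoiser: for each masked position,
    return a (token_id, confidence) guess. A real diffusion LLM would
    produce a full distribution over the vocabulary at every position."""
    preds = {}
    for i, t in enumerate(tokens):
        if t == MASK:
            preds[i] = (i % 100, random.random())  # dummy token + confidence
    return preds

def diffusion_decode(length, steps=4, seed=0):
    """Iteratively unmask a sequence in a few parallel refinement passes,
    committing the highest-confidence predictions at each pass. This is
    the generic masked-diffusion recipe, not Mercury's proprietary one."""
    random.seed(seed)
    tokens = [MASK] * length
    per_step = math.ceil(length / steps)  # equal unmasking budget per pass
    for _ in range(steps):
        preds = toy_denoiser(tokens)
        if not preds:
            break  # everything already unmasked
        # Rank masked positions by confidence and commit the top slice in parallel.
        ranked = sorted(preds.items(), key=lambda kv: kv[1][1], reverse=True)
        for pos, (tok, _conf) in ranked[:per_step]:
            tokens[pos] = tok
    return tokens

print(diffusion_decode(16, steps=4))
```

The speed advantage comes from the loop count: an autoregressive model needs one forward pass per token, while this scheme needs only `steps` passes regardless of sequence length, at the cost of revisiting (and in real systems, possibly revising) earlier commitments.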

📝 Abstract
We present Mercury, a new generation of commercial-scale large language models (LLMs) based on diffusion. These models are parameterized via the Transformer architecture and trained to predict multiple tokens in parallel. In this report, we detail Mercury Coder, our first set of diffusion LLMs designed for coding applications. Currently, Mercury Coder comes in two sizes: Mini and Small. These models set a new state-of-the-art on the speed-quality frontier. Based on independent evaluations conducted by Artificial Analysis, Mercury Coder Mini and Mercury Coder Small achieve state-of-the-art throughputs of 1109 tokens/sec and 737 tokens/sec, respectively, on NVIDIA H100 GPUs and outperform speed-optimized frontier models by up to 10x on average while maintaining comparable quality. We discuss additional results on a variety of code benchmarks spanning multiple languages and use-cases as well as real-world validation by developers on Copilot Arena, where the model currently ranks second on quality and is the fastest model overall. We also release a public API at https://platform.inceptionlabs.ai/ and free playground at https://chat.inceptionlabs.ai
Problem

Research questions and friction points this paper is trying to address.

Develop ultra-fast diffusion-based language models for coding
Achieve state-of-the-art speed while maintaining model quality
Optimize parallel token prediction for commercial-scale LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based large language models
Parallel multi-token prediction training
State-of-the-art speed-quality performance