Training Optimal Large Diffusion Language Models

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models (DLMs) lack systematic scaling laws, which hinders efficient resource allocation and performance prediction. Method: Quokka, the first unified scaling law for DLMs, extends the Chinchilla scaling paradigm to diffusion modeling while jointly accounting for compute-constrained and data-constrained regimes. Its empirical scaling analysis, tailored to DLMs, quantifies how key design factors (denoising strategy, training steps, dataset size, and model parameters) affect performance and efficiency. Contribution/Results: Quokka accurately predicts model performance across diverse configurations and enables principled resource allocation, yielding substantial training efficiency gains without sacrificing accuracy. It provides reproducible, generalizable optimization principles for large-scale DLM training, achieving Pareto improvements in efficiency and performance.
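
To make the Chinchilla-style setup concrete, here is a minimal sketch of a generic parametric loss surface, L(N, D) = E + A/N^alpha + B/D^beta, and of splitting a FLOP budget under the common C ≈ 6·N·D approximation. The functional form, the coefficients, and the budget rule are illustrative assumptions for this summary, not Quokka's fitted law.

```python
import numpy as np

# Illustrative Chinchilla-style loss surface: L(N, D) = E + A/N**alpha + B/D**beta,
# with N = parameter count and D = training tokens. Quokka extends this family of
# laws to diffusion language models; the coefficients here are placeholders, NOT
# values fitted in the paper.
def predicted_loss(N, D, E=1.7, A=400.0, B=1400.0, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

def compute_optimal(C, n_grid=np.logspace(7, 11, 400)):
    """Sweep model sizes under a FLOP budget C (approximated as C ~= 6*N*D),
    spend the remaining budget on tokens, and return the best predicted split."""
    d_grid = C / (6.0 * n_grid)
    losses = predicted_loss(n_grid, d_grid)
    i = int(np.argmin(losses))
    return n_grid[i], d_grid[i], losses[i]

if __name__ == "__main__":
    n_star, d_star, l_star = compute_optimal(C=1e21)
    print(f"N* ~ {n_star:.3g} params, D* ~ {d_star:.3g} tokens, predicted loss {l_star:.3f}")
```

Under such a law, the compute-optimal point balances the marginal loss reduction from a larger model against that from more tokens, which is what makes the allocation question well-posed.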

📝 Abstract
We introduce Quokka, the first systematic scaling law for diffusion language models (DLMs), encompassing both compute-constrained and data-constrained regimes and studying the key modeling and optimization designs. Quokka is a good friend of Chinchilla and provides wider scope. We hope the results will provide short-term practical guidance for DLM training and long-term inspiration for the whole AI community.
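
As a toy illustration of the data-constrained regime mentioned above, the sketch below caps the number of unique training tokens and lets the same compute budget flow into a larger model instead. It reuses the illustrative loss form from the earlier sketch and ignores the effect of repeating data, which a full data-constrained law would need to model explicitly.

```python
import numpy as np

# Toy data-constrained allocation: D_max caps the unique tokens a run can see.
# We clip the token budget and put leftover compute into a larger model. The
# loss form and its coefficients are the same illustrative placeholders as in
# the earlier sketch, not values from the paper.
def predicted_loss(N, D, E=1.7, A=400.0, B=1400.0, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

def data_constrained_allocation(C, D_max, n_grid=np.logspace(7, 11, 400)):
    d_grid = np.minimum(C / (6.0 * n_grid), D_max)  # cannot exceed unique tokens
    losses = predicted_loss(n_grid, d_grid)
    i = int(np.argmin(losses))
    return n_grid[i], d_grid[i], losses[i]

if __name__ == "__main__":
    for d_max in (1e10, 1e11, 1e12):
        n, d, l = data_constrained_allocation(C=1e21, D_max=d_max)
        print(f"D_max={d_max:.0e}: N*={n:.3g}, D*={d:.3g}, predicted loss={l:.3f}")
```

The tighter the token cap, the more the optimal split shifts toward a larger model trained on the capped data, which is the qualitative behaviour a data-constrained law has to capture.
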
Problem

Research questions and friction points this paper is trying to address.

Establishing scaling laws for diffusion language models
Optimizing model design under compute and data constraints
Providing training guidance for diffusion language model development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Quokka scaling law for diffusion language models (a fitting sketch follows this list)
Encompasses compute- and data-constrained regimes
Studies key modeling and optimization designs
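
For intuition on how such a law could be established empirically, here is a hypothetical fitting sketch: synthetic (N, D, loss) observations are generated from a known surface and least-squares fitting recovers the exponents. The data, the functional form, and the fitting choices are stand-ins, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical scaling-law fit. The observations are synthetic; Quokka's actual
# functional form, fitting procedure, and training runs are described in the paper.
def loss_form(ND, logA, logB, logE, alpha, beta):
    N, D = ND
    return np.exp(logE) + np.exp(logA) / N**alpha + np.exp(logB) / D**beta

# Synthetic "runs": a grid of model sizes and token counts with small noise.
rng = np.random.default_rng(0)
N_obs, D_obs = (a.ravel() for a in np.meshgrid(
    np.array([1e8, 3e8, 1e9, 3e9, 1e10]),
    np.array([2e9, 6e9, 2e10, 6e10, 2e11]),
))
true = dict(logA=np.log(400.0), logB=np.log(1400.0), logE=np.log(1.7), alpha=0.34, beta=0.28)
L_obs = loss_form((N_obs, D_obs), **true) + rng.normal(0.0, 0.01, size=N_obs.shape)

# Least-squares fit; log-parameterising A, B, E keeps them positive.
p0 = [np.log(300.0), np.log(1000.0), np.log(1.5), 0.3, 0.3]
popt, _ = curve_fit(loss_form, (N_obs, D_obs), L_obs, p0=p0, maxfev=20000)
print(f"fitted alpha={popt[3]:.3f}, beta={popt[4]:.3f}, irreducible loss={np.exp(popt[2]):.3f}")
```
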
🔎 Similar Papers