Planned Diffusion

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face a fundamental trade-off between inference speed and output quality. To address this, we propose a two-stage hybrid generation paradigm: first, a short, structured task plan is generated autoregressively to decompose long-text generation into semantically independent segments; second, a learnable planning mechanism guides a diffusion model to generate these segments in parallel. This approach innovatively combines the controllability of autoregressive decoding with the parallelism of diffusion models, achieving high output fidelity while significantly improving throughput. A runtime-tunable parameter enables fine-grained quality–latency trade-offs. Evaluated on AlpacaEval, our method achieves 1.27–1.81× speedup over baseline autoregressive inference, with only a marginal drop of 0.87–5.4% in human preference win rate. The results substantially extend the speed–quality Pareto frontier for LLM inference.

📝 Abstract
A central challenge in large language model inference is the trade-off between generation speed and output quality. Autoregressive models produce high-quality text but generate tokens sequentially. Diffusion models can generate tokens in parallel but often need many iterations to match the same quality. We propose planned diffusion, a hybrid method that combines the strengths of both paradigms. Planned diffusion works in two stages: first, the model creates a short autoregressive plan that breaks the output into smaller, independent spans. Second, the model generates these spans simultaneously using diffusion. This approach expands the speed-quality Pareto frontier and provides a practical path to faster, high-quality text generation. On AlpacaEval, a suite of 805 instruction-following prompts, planned diffusion achieves a Pareto-optimal trade-off between quality and latency, delivering a 1.27x to 1.81x speedup over autoregressive generation with only a 0.87% to 5.4% drop in win rate, respectively. Our sensitivity analysis shows that the planning overhead of planned diffusion is minimal and its planning is reliable, and simple runtime knobs provide flexible control of the quality-latency trade-off.
Problem

Research questions and friction points this paper is trying to address.

Balancing generation speed and output quality in language models
Overcoming sequential token generation limitations in autoregressive models
Reducing iteration requirements while maintaining quality in diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid method combining autoregressive and diffusion models
Two-stage generation with planning then parallel diffusion
Expands speed-quality Pareto frontier for text generation
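The two-stage control flow described above can be sketched as follows. This is a minimal illustrative outline, not the paper's implementation: `plan_autoregressive`, `diffuse_span`, and the use of a thread pool are hypothetical stand-ins for the actual autoregressive planner and parallel diffusion decoder.

```python
# Hypothetical sketch of planned diffusion's two-stage flow.
# Stage 1 runs sequentially; stage 2 decodes spans concurrently.
from concurrent.futures import ThreadPoolExecutor

def plan_autoregressive(prompt):
    # Stage 1 (assumed interface): a short autoregressive pass emits a
    # structured plan splitting the answer into independent span specs.
    return [f"{prompt}: point {i}" for i in range(3)]

def diffuse_span(spec, steps=4):
    # Stage 2 (assumed interface): each span would be refined over
    # `steps` denoising iterations; `steps` acts as the runtime
    # quality-latency knob mentioned in the summary. Mocked as a no-op.
    text = spec
    for _ in range(steps):
        text = text  # placeholder for one denoising iteration
    return text

def planned_diffusion(prompt, steps=4):
    plan = plan_autoregressive(prompt)      # short sequential prefix
    with ThreadPoolExecutor() as pool:      # spans generated in parallel
        spans = list(pool.map(lambda s: diffuse_span(s, steps), plan))
    return " ".join(spans)
```

The key property the sketch captures is that only the short plan is sequential; the bulk of the output is produced in parallel, which is where the reported 1.27-1.81x speedup would come from.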