T3D: Few-Step Diffusion Language Models via Trajectory Self-Distillation with Direct Discriminative Optimization

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models suffer significant quality degradation under few-step inference, forcing a trade-off between efficiency and generation quality. This work proposes a trajectory self-distillation framework that integrates Direct Discriminative Optimization (DDO), a reverse KL divergence objective, to guide the student model toward high-probability modes of the teacher model, enabling efficient, mode-focused knowledge distillation. The approach substantially narrows the performance gap between few-step and full-step decoding, outperforming existing few-step baselines and standard training strategies across multiple benchmarks. Notably, it achieves near full-step decoding performance even under stringent step constraints.

📝 Abstract
Diffusion large language models (DLLMs) have the potential to enable fast text generation by decoding multiple tokens in parallel. However, in practice, their inference efficiency is constrained by the need for many refinement steps, while aggressively reducing the number of steps leads to a substantial degradation in generation quality. To alleviate this, we propose a trajectory self-distillation framework that improves few-step decoding by distilling the model's own generative trajectories. We incorporate Direct Discriminative Optimization (DDO), a reverse-KL objective that promotes mode-seeking distillation and encourages the student to concentrate on high-probability teacher modes. Across benchmarks, our approach consistently outperforms strong few-step baselines and standard training under tight step budgets. Although full-step decoding remains superior, we substantially narrow the gap, establishing a strong foundation towards practical few-step DLLMs. The source code is available at https://github.com/Tyrion58/T3D.
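The abstract's key ingredient is the reverse-KL direction of the distillation objective: minimizing KL(student ‖ teacher) rather than the forward direction penalizes the student for placing mass where the teacher assigns little probability, which pushes the student to concentrate on high-probability teacher modes. The sketch below illustrates only this mode-seeking property on toy categorical distributions; it is not the paper's implementation, and all names and numbers are illustrative assumptions.

```python
import math

def reverse_kl(student, teacher):
    """KL(student || teacher), the mode-seeking direction.

    Terms where the student puts mass on low-probability teacher
    tokens (q log(q/p) with p tiny) dominate the sum, so the optimal
    student collapses onto the teacher's dominant modes.
    """
    return sum(q * math.log(q / p) for q, p in zip(student, teacher) if q > 0)

# Toy teacher distribution over 4 tokens: two plausible tokens,
# two near-impossible ones.
teacher = [0.70, 0.29, 0.005, 0.005]

# A student concentrated on the teacher's dominant mode ...
mode_seeking = [0.90, 0.09, 0.005, 0.005]
# ... versus a student that spreads mass onto low-probability tokens.
mode_covering = [0.40, 0.30, 0.15, 0.15]

print(reverse_kl(mode_seeking, teacher))   # small: stays on teacher modes
print(reverse_kl(mode_covering, teacher))  # large: mass on unlikely tokens
```

Under the reverse-KL objective the mode-seeking student scores strictly better, which is the behavior the paper exploits to keep few-step generations on high-probability teacher trajectories.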
Problem

Research questions and friction points this paper is trying to address.

diffusion language models
few-step decoding
inference efficiency
generation quality
step reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

trajectory self-distillation
Direct Discriminative Optimization
few-step diffusion
diffusion language models
reverse-KL distillation