Decoding Large Language Diffusion Models with Foreseeing Movement

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language diffusion models (LLDMs) exhibit high sensitivity of decoding performance to token generation order, yet existing heuristic methods model only local dependencies while neglecting long-range effects. To address this, we propose FDM, a foresight-driven decoding framework that jointly models local consistency and global planning, and introduces a search-based discrete optimization strategy to dynamically identify critical exploration nodes. Furthermore, we design FDM-A, a lightweight variant that applies deep path search only at a few key steps, balancing computational efficiency and decoding stability. Extensive experiments across multiple benchmarks and architectures demonstrate that FDM-A significantly improves the trade-off between inference quality and speed, consistently outperforming state-of-the-art heuristic approaches. Our results validate both the effectiveness and scalability of long-range decoding planning in LLDMs.

📝 Abstract
Large Language Diffusion Models (LLDMs) benefit from a flexible decoding mechanism that enables parallelized inference and more controllable generation than autoregressive models. Yet this flexibility introduces a critical challenge: inference performance becomes highly sensitive to the order in which tokens are decoded. Existing heuristic methods, however, focus mainly on local effects while overlooking long-term impacts. To address this limitation, we propose the Foreseeing Decoding Method (FDM), a novel approach that integrates both local and global considerations to unlock the full potential of flexible decoding, employing a search-based strategy to enable effective optimization in discrete spaces. Furthermore, by analyzing the consistency of chosen tokens across the full decoding process, we develop a variant, FDM with Acceleration (FDM-A), which restricts deep exploration to the critical steps where exploration and efficiency must be balanced. Extensive experiments across diverse benchmarks and model architectures validate the scalability of FDM and demonstrate the superior efficiency-performance trade-off achieved by FDM-A. Our work may provide a principled step toward more powerful decoding methods for LLDMs.
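The core idea of foresight-driven order selection can be illustrated with a toy sketch. This is not the paper's FDM algorithm; `TABLE`, `greedy_order`, and `foresight_order` are hypothetical names, and the hand-built score table simply stands in for a model's per-position confidence, conditioned on the positions decoded so far. The point is that a purely local (greedy) heuristic can lock in a poor decoding order, while a shallow lookahead search over future steps recovers a better one.

```python
# Toy illustration (assumed setup, not the paper's FDM): score(prefix, p)
# gives the "confidence" of decoding position p next, given the tuple of
# positions already decoded. Greedy picks the locally best position;
# foresight adds a depth-limited search over future choices.

def greedy_order(score, positions):
    """Pick the highest-scoring position at each step (local heuristic)."""
    remaining, order, total = list(positions), [], 0.0
    while remaining:
        best = max(remaining, key=lambda p: score(tuple(order), p))
        total += score(tuple(order), best)
        order.append(best)
        remaining.remove(best)
    return order, total

def foresight_order(score, positions, depth=2):
    """Rank each candidate by its score plus the best achievable score
    over the next `depth - 1` steps (a shallow lookahead search)."""
    def lookahead(prefix, remaining, d):
        if d == 0 or not remaining:
            return 0.0
        return max(score(prefix, p)
                   + lookahead(prefix + (p,), [q for q in remaining if q != p], d - 1)
                   for p in remaining)
    remaining, order, total = list(positions), [], 0.0
    while remaining:
        best = max(remaining,
                   key=lambda p: score(tuple(order), p)
                   + lookahead(tuple(order) + (p,), [q for q in remaining if q != p],
                               depth - 1))
        total += score(tuple(order), best)
        order.append(best)
        remaining.remove(best)
    return order, total

# Hand-crafted confidences where the greedy choice (position 0 first)
# leads into low-confidence territory, but starting at position 1 pays off.
TABLE = {
    ((), 0): 0.9, ((), 1): 0.5, ((), 2): 0.1,
    ((0,), 1): 0.1, ((0,), 2): 0.1,
    ((1,), 0): 0.8, ((1,), 2): 0.7,
    ((2,), 0): 0.1, ((2,), 1): 0.1,
    ((0, 1), 2): 0.1, ((0, 2), 1): 0.1,
    ((1, 0), 2): 0.9, ((1, 2), 0): 0.9,
    ((2, 0), 1): 0.1, ((2, 1), 0): 0.1,
}
score = lambda prefix, p: TABLE[(prefix, p)]

g_order, g_total = greedy_order(score, [0, 1, 2])      # → [0, 1, 2]
f_order, f_total = foresight_order(score, [0, 1, 2])   # → [1, 0, 2]
```

On this toy table, the lookahead order accumulates a higher total confidence than the greedy one, which is the kind of long-range effect the abstract argues local heuristics miss.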
Problem

Research questions and friction points this paper is trying to address.

Optimizes decoding order sensitivity in diffusion models
Integrates local and global token selection strategies
Enhances efficiency-performance trade-off via accelerated search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates local and global token decoding considerations
Uses search-based strategy for discrete space optimization
Accelerates by limiting deep exploration to critical steps
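The acceleration idea in the last bullet can be sketched with a simple gating rule. The criterion below (a confidence-margin threshold) is an assumption for illustration, not the paper's actual definition of "critical steps"; `select_position`, `margin`, and `threshold` are hypothetical names. The sketch spends the expensive lookahead only when the top candidates are nearly tied, and otherwise falls back to the cheap local choice.

```python
# Hedged sketch of FDM-A-style gating (illustrative criterion, not the
# paper's): run deep search only at "critical" steps where the top two
# candidate confidences are close; otherwise decode greedily.

def margin(scores):
    """Gap between the best and second-best candidate scores."""
    top = sorted(scores, reverse=True)
    return top[0] - top[1] if len(top) > 1 else float("inf")

def select_position(prefix, remaining, score, lookahead, threshold=0.2):
    local = {p: score(prefix, p) for p in remaining}
    if margin(list(local.values())) >= threshold:
        # Confident step: the local heuristic suffices, skip deep search.
        return max(local, key=local.get)
    # Ambiguous ("critical") step: spend compute on a deeper search.
    return max(remaining,
               key=lambda p: local[p]
               + lookahead(prefix + (p,), [q for q in remaining if q != p]))

# Confident case: position 0 wins locally by a wide margin, no lookahead needed.
score_a = lambda prefix, p: {0: 0.9, 1: 0.3, 2: 0.2}[p]
no_look = lambda prefix, rem: 0.0

# Ambiguous case: 0 and 1 are nearly tied locally, so the lookahead
# (which strongly favors continuing from position 1) decides.
score_b = lambda prefix, p: {0: 0.5, 1: 0.45, 2: 0.1}[p]
look_b = lambda prefix, rem: 1.0 if prefix[-1] == 1 else 0.0
```

Gating by a cheap-to-compute signal like this is what lets an FDM-A-style method keep most of the speed of greedy decoding while reserving deep path search for the few steps that actually determine quality.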