AI Summary
Existing decoding strategies for diffusion language models are constrained by positional bias, limiting their ability to exploit the full potential of non-autoregressive generation with arbitrary token ordering. This work introduces frequency-domain analysis into this setting for the first time, revealing through Fourier transforms that low-frequency components of hidden states encode global structural information while high-frequency components capture local details. Building on this insight, the authors propose a frequency-guided "structure-to-detail" generation paradigm that dynamically modulates spectral content during decoding via a sliding window mechanism. This approach transcends the limitations of conventional sequential generation, achieving relative performance improvements of 20.4% on LLaDA1.5-8B and 16.0% on LLaDA-8B-Instruct, and significantly outperforming the comparably sized autoregressive model Llama3.1-8B-Instruct.
Abstract
Despite the non-autoregressive potential of diffusion language models (dLLMs), existing decoding strategies exhibit positional bias and fail to fully unlock the potential of arbitrary-order generation. In this work, we delve into the inherent spectral characteristics of dLLMs and present the first frequency-domain analysis showing that low-frequency components of hidden states primarily encode global structural information and long-range dependencies, while high-frequency components characterize local details. Based on this observation, we propose FourierSampler, which leverages a frequency-domain sliding window mechanism to dynamically guide the model toward "structure-to-detail" generation. FourierSampler outperforms other inference enhancement strategies on LLaDA and SDAR, achieving relative improvements of 20.4% on LLaDA1.5-8B and 16.0% on LLaDA-8B-Instruct. It notably surpasses similarly sized autoregressive models such as Llama3.1-8B-Instruct.
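To make the "structure-to-detail" idea concrete, the sketch below shows one way a frequency-domain sliding window over hidden states could look. This is an illustrative low-pass filter along the sequence axis, not the paper's actual implementation: the function name `frequency_window_filter`, the `keep_ratio` schedule, and the tensor shapes are all assumptions for exposition.

```python
import numpy as np

def frequency_window_filter(hidden, keep_ratio):
    """Illustrative sketch (not the paper's code): keep only the lowest
    `keep_ratio` fraction of frequencies along the sequence axis of a
    (seq_len, dim) hidden-state matrix, then transform back."""
    seq_len = hidden.shape[0]
    spectrum = np.fft.rfft(hidden, axis=0)  # (seq_len // 2 + 1, dim)
    cutoff = max(1, int(np.ceil(spectrum.shape[0] * keep_ratio)))
    mask = np.zeros((spectrum.shape[0], 1))
    mask[:cutoff] = 1.0                     # low-pass sliding window
    return np.fft.irfft(spectrum * mask, n=seq_len, axis=0)

# Widening the window across decoding steps moves generation from global
# structure (low frequencies only) toward local detail (full spectrum).
hidden = np.random.randn(64, 16)
early = frequency_window_filter(hidden, 0.1)  # "structure" phase
late = frequency_window_filter(hidden, 1.0)   # "detail" phase: full spectrum
```

With `keep_ratio=1.0` the mask passes every frequency, so the round trip reconstructs the hidden states exactly; a small ratio retains only the smooth, long-range component, which is the intuition behind decoding structure before detail.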