🤖 AI Summary
This work investigates why diffusion-based large language models (Diffusion LLMs) exhibit enhanced performance on complex reasoning tasks when generating an increased number of end-of-sequence (EoS) tokens. We propose and empirically validate the “EoS-by-EoS thinking” hypothesis, demonstrating that the hidden states of EoS tokens implicitly function as a reasoning space that encodes critical problem-solving information. Through controlled prompting experiments, counterfactual patching interventions, and evaluations across diverse reasoning tasks—including arithmetic addition, entity tracking, and Sudoku—on models such as LLaDA1.5, LLaDA2.0-mini, and Dream-v0, we show that augmenting EoS token count significantly improves reasoning accuracy. Moreover, targeted perturbations to EoS hidden states systematically alter model outputs, challenging the conventional view that EoS tokens serve merely as termination signals.
📝 Abstract
Diffusion LLMs have been proposed as an alternative to autoregressive LLMs, excelling especially at complex reasoning tasks with interdependent sub-goals. Curiously, this is particularly true if the generation length, i.e., the number of tokens the model has to output, is set to a much higher value than is required for providing the correct answer to the task, and the model pads its answer with end-of-sequence (EoS) tokens. We hypothesize that diffusion models think EoS-by-EoS, that is, they use the representations of EoS tokens as a hidden scratchpad, which allows them to solve harder reasoning problems. We experiment with the diffusion models LLaDA1.5, LLaDA2.0-mini, and Dream-v0 on the tasks Addition, Entity Tracking, and Sudoku. In a controlled prompting experiment, we confirm that adding EoS tokens improves the LLMs' reasoning capabilities. To further verify whether they serve as a space for hidden computations, we patch the hidden states of the EoS tokens with those of a counterfactual generation, which frequently changes the generated output to the counterfactual. The success of the causal intervention underscores that the EoS tokens, which one may expect to be devoid of meaning, carry information on the problem to solve. The behavioral experiments and the causal interventions indicate that diffusion LLMs can indeed think EoS-by-EoS.
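The counterfactual patching intervention described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation (the paper patches hidden states inside real diffusion LLMs, typically via framework hooks); here a toy two-layer NumPy "model" stands in, and the hypothetical positions `eos` mark where EoS padding would sit. The key idea is the same: run a counterfactual input, capture its hidden states, and overwrite the clean run's hidden states at the EoS positions before the remaining computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer "model": hidden = layer1(x), output = layer2(hidden).
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 8))

def layer1(x):
    return np.tanh(x @ W1)

def layer2(h):
    return h @ W2

def forward(x, patch=None, eos_positions=None):
    """Run the model; if `patch` is given, overwrite the hidden states
    at `eos_positions` with the donor activations (causal intervention)."""
    h = layer1(x)
    if patch is not None:
        h = h.copy()
        h[eos_positions] = patch[eos_positions]  # counterfactual patching
    return layer2(h)

# Two "prompts": sequences of 6 token embeddings; the last 3 positions
# play the role of EoS padding.
x_clean = rng.normal(size=(6, 8))
x_counter = rng.normal(size=(6, 8))
eos = [3, 4, 5]

h_counter = layer1(x_counter)  # donor hidden states from counterfactual run
out_clean = forward(x_clean)
out_patched = forward(x_clean, patch=h_counter, eos_positions=eos)

# The intervention changes the output at the patched EoS positions...
assert not np.allclose(out_patched[eos], out_clean[eos])
# ...while untouched positions are unchanged (this toy has no cross-token mixing).
assert np.allclose(out_patched[:3], out_clean[:3])
```

In the paper's setting the analogue of the final assertion is behavioral: if EoS hidden states carry task-relevant computation, patching them should steer the generated answer toward the counterfactual, which is what the authors report.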