DLM-Scope: Mechanistic Interpretability of Diffusion Language Models via Sparse Autoencoders

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the current lack of mechanistic interpretability tools for diffusion language models (DLMs), which hinders understanding of their internal representations. We propose DLM-Scope, the first interpretability framework for DLMs, built on Top-K sparse autoencoders (SAEs) adapted to the diffusion setting. Inserting SAEs in early layers not only reduces cross-entropy loss but also enables more effective interventions across diffusion timesteps. Our analysis reveals that SAE-derived features guide token decoding order and remain stable during post-training phases. Experimental results demonstrate that the extracted features are highly human-interpretable and that SAE-based interventions outperform conventional language-model steering methods, confirming the practical utility of the approach for analyzing and controlling DLMs.
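To make the core building block concrete, here is a minimal sketch of a Top-K sparse autoencoder of the kind the summary describes: it encodes a layer's activation vector into a wide latent space, keeps only the k largest latent activations, and decodes back to the model's hidden dimension. This is a generic illustration, not the paper's released implementation; the class name, the pre-bias term, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Top-K sparse autoencoder (illustrative sketch, not the paper's code).

    Reconstructs an activation vector while zeroing all but the k
    largest latent activations, yielding sparse, per-feature codes.
    """

    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)
        # Bias subtracted before encoding and re-added after decoding,
        # a common SAE design choice (assumed here, not confirmed by the paper).
        self.pre_bias = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the top-k latent activations per token.
        z = self.encoder(x - self.pre_bias)
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        # Decode back to the residual-stream dimension.
        x_hat = self.decoder(z_sparse) + self.pre_bias
        return x_hat, z_sparse
```

In a typical setup, such a module would be trained on activations collected from one DLM layer, minimizing the mean-squared reconstruction error between `x` and `x_hat`.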

📝 Abstract
Sparse autoencoders (SAEs) have become a standard tool for mechanistic interpretability in autoregressive large language models (LLMs), enabling researchers to extract sparse, human-interpretable features and intervene on model behavior. As diffusion language models (DLMs) have recently become an increasingly promising alternative to autoregressive LLMs, it is essential to develop tailored mechanistic interpretability tools for this emerging class of models. In this work, we present DLM-Scope, the first SAE-based interpretability framework for DLMs, and demonstrate that trained Top-K SAEs can faithfully extract interpretable features. Notably, we find that inserting SAEs affects DLMs differently than autoregressive LLMs: while SAE insertion in LLMs typically incurs a loss penalty, in DLMs it can reduce cross-entropy loss when applied to early layers, a phenomenon that is absent or markedly weaker in LLMs. Additionally, SAE features in DLMs enable more effective diffusion-time interventions, often outperforming LLM steering. Moreover, we pioneer new SAE-based research directions for DLMs: we show that SAEs can provide useful signals for DLM decoding order, and that SAE features remain stable during the post-training phase of DLMs. Our work establishes a foundation for mechanistic interpretability in DLMs and demonstrates the great potential of applying SAEs to DLM-related tasks and algorithms.
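The two uses of SAE features that the abstract highlights, diffusion-time steering and decoding-order signals, can be illustrated with short helpers building on the TopKSAE sketch above. Both functions, their names, and the `alpha` scale are hypothetical assumptions for illustration; the paper's actual intervention and ordering procedures may differ.

```python
import torch

def steer(hidden: torch.Tensor, sae: "TopKSAE",
          feature_idx: int, alpha: float) -> torch.Tensor:
    """Steering sketch: nudge layer activations along one SAE feature.

    hidden: [batch, seq, d_model] activations at the hooked DLM layer.
    The decoder column for `feature_idx` is that feature's direction
    in the residual stream; applying this at every denoising timestep
    (e.g. via a forward hook) is one plausible diffusion-time intervention.
    """
    direction = sae.decoder.weight[:, feature_idx]  # [d_model]
    return hidden + alpha * direction

def rank_positions_by_feature(hidden: torch.Tensor, sae: "TopKSAE",
                              feature_idx: int) -> torch.Tensor:
    """Decoding-order sketch: score each position by how strongly a
    chosen SAE feature fires there, so higher-scoring masked positions
    can be decoded earlier. One plausible reading of 'SAEs provide
    useful signals for DLM decoding order', not the paper's algorithm.
    """
    _, z = sae(hidden)                              # [batch, seq, d_latent]
    return z[..., feature_idx].argsort(dim=-1, descending=True)
```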
Problem

Research questions and friction points this paper is trying to address.

Diffusion Language Models
Mechanistic Interpretability
Sparse Autoencoders
Model Interpretability
Feature Extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
Sparse Autoencoders
Mechanistic Interpretability
Feature Intervention
Decoding Order