🤖 AI Summary
This work addresses two limitations of conventional confidence-based attribution in discrete diffusion large language models (dLLMs): signal redundancy and insufficient fine-grained discriminability. We propose a decoding-trajectory-based model fingerprinting method built on two core components: (1) a Directed Decoding Map (DDM) that explicitly captures structural dependencies across multi-step decoding, and (2) Gaussian-Trajectory Attribution (GTA), which fits a cell-wise Gaussian distribution at each decoding position and uses trajectory log-likelihood as a position-sensitive attribution score. By leveraging the bidirectional decoding property of dLLMs, the method combines structured trajectory analysis with probabilistic modeling. Extensive experiments demonstrate significant improvements over baselines across diverse scenarios, including cross-model identification, checkpoint differentiation, and fine-tuned variant detection, achieving notably higher attribution accuracy. This work establishes a novel paradigm for enhancing traceability and copyright protection in dLLMs.
📝 Abstract
Discrete Diffusion Large Language Models (dLLMs) have recently emerged as a competitive paradigm for non-autoregressive language modeling. Their distinctive decoding mechanism enables faster inference and strong performance on code generation and mathematical tasks. In this work, we show that the decoding mechanism of dLLMs not only enhances model utility but can also serve as a powerful tool for model attribution. A key challenge in this problem lies in the diversity of attribution scenarios, which include distinguishing between different models as well as between different checkpoints or backups of the same model. To ensure broad applicability, we identify two fundamental problems: what information to extract from the decoding trajectory, and how to utilize it effectively. We first observe that relying directly on per-step model confidence yields poor performance. This is mainly due to the bidirectional decoding nature of dLLMs: each newly decoded token influences the confidence of other decoded tokens, making model confidence highly redundant and washing out structural signal about decoding order and dependencies. To overcome this, we propose a novel information extraction scheme called the Directed Decoding Map (DDM), which captures structural relationships between decoding steps and better reveals model-specific behaviors. Furthermore, to make full use of the extracted structural information during attribution, we propose Gaussian-Trajectory Attribution (GTA), in which we fit a cell-wise Gaussian distribution at each decoding position for each target model and define the likelihood of a trajectory as its attribution score: if a trajectory exhibits higher log-likelihood under the distribution of a specific model, it is more likely to have been generated by that model. Extensive experiments under different settings validate the utility of our methods.
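The GTA scoring rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each decoding trajectory is summarized as a fixed-length vector of per-position features derived from the DDM (the feature extraction itself is not shown), fits an independent Gaussian per position for each candidate model, and attributes a trajectory to the model under which its summed cell-wise log-likelihood is highest. All function and variable names are hypothetical.

```python
import numpy as np

def fit_gta(trajectories: np.ndarray, eps: float = 1e-6):
    """Fit a cell-wise Gaussian per decoding position for one target model.

    trajectories: (n_samples, n_positions) array of DDM-derived features.
    Returns per-position means and variances; eps guards against zero variance.
    """
    mu = trajectories.mean(axis=0)
    var = trajectories.var(axis=0) + eps
    return mu, var

def gta_score(trajectory: np.ndarray, mu: np.ndarray, var: np.ndarray) -> float:
    """Sum of cell-wise Gaussian log-likelihoods; higher means a better match."""
    log_probs = -0.5 * (np.log(2 * np.pi * var) + (trajectory - mu) ** 2 / var)
    return float(log_probs.sum())

def attribute(trajectory: np.ndarray, model_params: dict) -> str:
    """Attribute a trajectory to the candidate model with the highest score."""
    return max(model_params, key=lambda m: gta_score(trajectory, *model_params[m]))
```

As a toy usage example, trajectories drawn from two synthetic "models" with different per-position statistics are correctly separated:

```python
rng = np.random.default_rng(0)
traj_a = rng.normal(0.0, 1.0, size=(200, 16))  # stand-in for model A trajectories
traj_b = rng.normal(2.0, 1.0, size=(200, 16))  # stand-in for model B trajectories
params = {"model_a": fit_gta(traj_a), "model_b": fit_gta(traj_b)}
attribute(traj_a[0], params)  # → "model_a"
```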