Revisiting Judge Decoding from First Principles via Training-Free Distributional Divergence

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of Judge Decoding, which relies on expensive and noisy supervised signals that hinder its efficiency and generalization. By reasoning from first principles, the study reveals that the core scoring mechanism fundamentally depends on the divergence between the output distributions of the draft and target models, and for the first time establishes a theoretical connection between linear discriminators and KL divergence. Building on this insight, the authors propose a training-free validation mechanism that directly computes distributional discrepancies using logits, thereby eliminating the need for any supervisory signal. The method matches or even surpasses trained discriminators such as AutoJudge across multiple reasoning and code generation benchmarks, achieving substantial gains in computational efficiency and cross-domain robustness.

📝 Abstract
Judge Decoding accelerates LLM inference by relaxing the strict verification of Speculative Decoding, yet it typically relies on expensive and noisy supervision. In this work, we revisit this paradigm from first principles, revealing that the "criticality" scores learned via costly supervision are intrinsically encoded in the draft-target distributional divergence. We theoretically prove a structural correspondence between learned linear judges and Kullback-Leibler (KL) divergence, demonstrating that they rely on the same underlying logit primitives. Guided by this, we propose a simple, training-free verification mechanism based on KL divergence. Extensive experiments across reasoning and coding benchmarks show that our method matches or outperforms complex trained judges (e.g., AutoJudge), offering superior robustness to domain shifts and eliminating the supervision bottleneck entirely.
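The core idea, computing the draft-target distributional discrepancy directly from logits and accepting draft tokens when the two distributions agree, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the acceptance threshold are illustrative assumptions, and the paper's exact scoring rule may differ.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(target_logits, draft_logits, eps=1e-12):
    """KL(p_target || q_draft) computed from the two models' raw logits."""
    p = softmax(np.asarray(target_logits, dtype=np.float64))
    q = softmax(np.asarray(draft_logits, dtype=np.float64))
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def accept_draft_token(target_logits, draft_logits, threshold=1.0):
    """Training-free verification: accept the draft token when the
    draft and target distributions are close (low KL divergence).
    The threshold value here is an illustrative hyperparameter."""
    return kl_divergence(target_logits, draft_logits) <= threshold
```

When the two models produce identical logits the divergence is zero and the token is accepted; a large disagreement drives the KL above the threshold and triggers a fallback to the target model, replacing the supervised "criticality" score of trained judges.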
Problem

Research questions and friction points this paper is trying to address.

Judge Decoding
Speculative Decoding
supervision bottleneck
distributional divergence
LLM inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Judge Decoding
training-free
distributional divergence
KL divergence
Speculative Decoding