🤖 AI Summary
This work addresses the limitations of Judge Decoding, which relies on expensive and noisy supervised signals that hinder its efficiency and generalization. By reasoning from first principles, the study reveals that the core scoring mechanism fundamentally depends on the divergence between the output distributions of the draft and target models, and for the first time establishes a theoretical connection between linear discriminators and KL divergence. Building on this insight, the authors propose a training-free validation mechanism that directly computes distributional discrepancies using logits, thereby eliminating the need for any supervisory signal. The method matches or even surpasses trained discriminators such as AutoJudge across multiple reasoning and code generation benchmarks, achieving substantial gains in computational efficiency and cross-domain robustness.
📝 Abstract
Judge Decoding accelerates LLM inference by relaxing the strict verification of Speculative Decoding, yet it typically relies on expensive and noisy supervision. In this work, we revisit this paradigm from first principles, revealing that the "criticality" scores learned via costly supervision are intrinsically encoded in the draft-target distributional divergence. We theoretically prove a structural correspondence between learned linear judges and Kullback-Leibler (KL) divergence, demonstrating that they rely on the same underlying logit primitives. Guided by this insight, we propose a simple, training-free verification mechanism based on KL divergence. Extensive experiments across reasoning and coding benchmarks show that our method matches or outperforms complex trained judges (e.g., AutoJudge), offering superior robustness to domain shift and eliminating the supervision bottleneck entirely.
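To make the core idea concrete, here is a minimal sketch of KL-based draft verification. It assumes access to the raw logits of both models at each drafted position and accepts the longest prefix of drafted tokens whose per-token KL divergence stays below a threshold. The KL direction, the `threshold` value, and the function names are illustrative assumptions, not details specified by the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p_logits, q_logits):
    # Per-position KL(P || Q) computed directly from raw logits.
    # A small epsilon guards against log(0) for near-zero probabilities.
    p = softmax(p_logits)
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(q_logits) + 1e-12)
    return (p * (log_p - log_q)).sum(axis=-1)

def verify_draft_tokens(target_logits, draft_logits, threshold=0.5):
    # Accept drafted tokens whose draft-target divergence is below the
    # (hypothetical) threshold. Speculative decoding only keeps a
    # contiguous prefix, so we stop at the first rejection.
    kl = kl_divergence(target_logits, draft_logits)
    accepted = kl < threshold
    if accepted.all():
        return len(accepted), kl
    return int(np.argmax(~accepted)), kl
```

For example, if the two models agree on the first two drafted positions but diverge sharply on the third, `verify_draft_tokens` returns an accepted prefix length of 2. Unlike a learned linear judge, this routine needs no labeled acceptance data: the verification signal is read off the logits the models already produce.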