🤖 AI Summary
To verify that large language model (LLM) inference is performed correctly despite benign numerical noise, this paper proposes two verifiable inference mechanisms: Token-DiFR and Activation-DiFR. Methodologically, it introduces seed-synchronized sampling, which tightly constrains the admissible output space so that the output tokens themselves serve as zero-overhead audit evidence, and it designs an activation fingerprinting scheme based on random orthogonal projections that compresses intermediate representations, sharply reducing verification communication and computation. Experimentally, Token-DiFR detects 4-bit quantization errors with AUC > 0.999 within 300 output tokens; Activation-DiFR reaches equivalent detection accuracy using only two tokens while cutting communication costs by 25-75%. Together, the two frameworks deliver efficient, low-overhead, and robust verification of LLM inference.
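The seed-synchronized check described above can be illustrated with a minimal sketch: a verifier re-derives the token the trusted reference would have sampled at each step under the shared seed, and counts divergences from the provider's output. The sampling rule, seeding scheme (`seed + step`), and function names here are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

def sample_token(logits, seed, step):
    """Sample a token from softmax(logits) with a deterministic RNG.
    Seeding by (seed + step) is a simplifying assumption for illustration."""
    rng = np.random.default_rng(seed + step)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

def token_divergence_count(provider_tokens, reference_logits, seed):
    """Count positions where the provider's token differs from the token
    the reference implementation would sample under the same seed."""
    mismatches = 0
    for step, (tok, logits) in enumerate(zip(provider_tokens, reference_logits)):
        if tok != sample_token(logits, seed, step):
            mismatches += 1
    return mismatches
```

Because the seed pins down the reference sample at every step, an honest provider's output matches exactly, and any deviation (a bug, tampering, or a different model) shows up as a nonzero divergence count.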
📝 Abstract
As demand for LLM inference grows, it is becoming increasingly important that providers and their customers can verify that inference processes are performed correctly, without errors or tampering. However, re-running the same inference process twice often leads to different results due to benign numerical noise, making it difficult to distinguish legitimate variation from actual problems. To address this problem, we introduce Token-DiFR (Token-Divergence-From-Reference), a method for verifying inference outputs by comparing generated tokens against predictions made by a trusted reference implementation conditioned on the same random seed. Sampling seed synchronization tightly constrains valid outputs, leaving providers minimal room to deviate from correct inference, which allows output tokens themselves to serve as auditable evidence of correctness at zero additional cost to the provider. Token-DiFR reliably identifies sampling errors, simulated bugs, and model quantization, detecting 4-bit quantization with AUC $>$ 0.999 within 300 output tokens. For applications requiring sample-efficient forward-pass verification, we additionally introduce Activation-DiFR, a scheme that uses random orthogonal projections to compress activations into compact fingerprints for subsequent verification. Activation-DiFR detects 4-bit quantization with AUC $>$ 0.999 using just 2 output tokens, while reducing communication overhead by 25-75% relative to existing methods. We release an open-source integration with vLLM to accelerate practical deployment of verifiable inference.
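The activation-fingerprinting idea in the abstract can be sketched as follows: a shared seed defines a random matrix with orthonormal rows, activations are projected down to a short fingerprint, and the verifier compares fingerprints within a tolerance. The QR-based construction, the dimensions, and the tolerance are assumptions for illustration; the paper's Activation-DiFR scheme may differ in its details.

```python
import numpy as np

def make_projection(d, k, seed):
    """Build a k x d projection with orthonormal rows from a shared seed
    (illustrative construction via QR of a Gaussian matrix)."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(d, k))
    q, _ = np.linalg.qr(g)  # q: d x k with orthonormal columns
    return q.T               # k x d with orthonormal rows

def fingerprint(activation, proj):
    """Compress a d-dimensional activation into a k-dimensional fingerprint."""
    return proj @ activation

def verify_fingerprint(fp_provider, activation_ref, proj, tol=1e-2):
    """Accept iff the provider's fingerprint is close to the fingerprint
    of the trusted reference activation (tol absorbs benign noise)."""
    return np.linalg.norm(fp_provider - fingerprint(activation_ref, proj)) <= tol
```

Since orthonormal rows preserve expected distances, a large deviation in the full activation remains visible in the compressed fingerprint, while the verifier only ever transmits k numbers instead of the full d-dimensional activation.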