DiFR: Inference Verification Despite Nondeterminism

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To verify large language model (LLM) inference outputs in the presence of benign numerical noise, this paper proposes two verifiable inference mechanisms: Token-DiFR and Activation-DiFR. Methodologically, it synchronizes the sampling seed between provider and verifier, which tightly constrains the admissible output space so that the output tokens themselves serve as zero-overhead audit evidence, and it designs an activation fingerprinting scheme based on random orthogonal projections that compresses intermediate representations, sharply reducing verification communication and computation overhead. Experimentally, Token-DiFR detects 4-bit quantization with AUC > 0.999 within 300 output tokens; Activation-DiFR attains equivalent detection accuracy using only two tokens while cutting communication overhead by 25-75% relative to existing methods. Together, the two frameworks deliver efficient, low-overhead, and robust verification of LLM inference.

📝 Abstract
As demand for LLM inference grows, it is becoming increasingly important that providers and their customers can verify that inference processes are performed correctly, without errors or tampering. However, re-running the same inference process twice often leads to different results due to benign numerical noise, making it difficult to distinguish legitimate variation from actual problems. To address this problem, we introduce Token-DiFR (Token-Divergence-From-Reference), a method for verifying inference outputs by comparing generated tokens against predictions made by a trusted reference implementation conditioned on the same random seed. Sampling seed synchronization tightly constrains valid outputs, leaving providers minimal room to deviate from correct inference, which allows output tokens themselves to serve as auditable evidence of correctness at zero additional cost to the provider. Token-DiFR reliably identifies sampling errors, simulated bugs, and model quantization, detecting 4-bit quantization with AUC > 0.999 within 300 output tokens. For applications requiring sample-efficient forward-pass verification, we additionally introduce Activation-DiFR, a scheme that uses random orthogonal projections to compress activations into compact fingerprints for subsequent verification. Activation-DiFR detects 4-bit quantization with AUC > 0.999 using just 2 output tokens, while reducing communication overhead by 25-75% relative to existing methods. We release an open-source integration with vLLM to accelerate practical deployment of verifiable inference.
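The seed-synchronized check described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_with_seed` and the per-position seeding scheme (`seed + t`) are assumptions made for the example.

```python
import numpy as np

def sample_with_seed(logits, seed):
    # Hypothetical sampler: softmax sampling driven by a shared RNG seed,
    # so verifier and provider draw identical tokens from identical logits.
    rng = np.random.default_rng(seed)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

def token_difr_check(provider_tokens, reference_logits, seed):
    # Re-derive the expected token at each position from the trusted
    # reference logits and the synchronized seed; count divergences.
    divergences = 0
    for t, logits in enumerate(reference_logits):
        expected = sample_with_seed(logits, seed + t)  # per-position seed (assumption)
        if provider_tokens[t] != expected:
            divergences += 1
    return divergences
```

With the seed fixed, an honest provider's tokens reproduce the reference draws exactly, so any divergence count above the expected noise floor flags sampling errors or tampering.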
Problem

Research questions and friction points this paper is trying to address.

Verifying LLM inference correctness despite nondeterministic numerical noise
Detecting sampling errors and model quantization through token comparison
Reducing communication overhead for sample-efficient forward-pass verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-DiFR compares generated tokens with reference predictions
Activation-DiFR compresses activations into compact fingerprints
Both methods use synchronized random seeds for verification
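The Activation-DiFR idea of compressing activations via random orthogonal projections can be sketched as below. This is an illustrative sketch under assumptions: the projection is built by QR-decomposing a seeded Gaussian matrix, and the divergence metric (mean L2 distance) is a stand-in, not the paper's exact scheme.

```python
import numpy as np

def make_projection(d_model, k, seed):
    # Random orthogonal projection (assumption: QR of a seeded Gaussian
    # matrix). Prover and verifier derive the same matrix from the seed.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d_model, k)))
    return q  # (d_model, k), orthonormal columns

def fingerprint(activations, projection):
    # Compress a (tokens, d_model) activation matrix to a (tokens, k)
    # fingerprint that is cheap to transmit for verification.
    return activations @ projection

def activation_difr_distance(fp_provider, fp_reference):
    # Divergence-from-reference: mean L2 distance between fingerprints.
    return float(np.linalg.norm(fp_provider - fp_reference, axis=-1).mean())
```

Because the projection is orthogonal, distances between fingerprints approximately preserve distances between the full activations, so systematic deviations such as quantization error remain detectable from just a few tokens' fingerprints.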
Adam Karvonen
ML Researcher, Machine Learning
Daniel Reuter
ML Alignment and Theory Scholars (MATS)
Roy Rinberg
ML Alignment and Theory Scholars (MATS), Harvard University
Luke Marks
ML Alignment and Theory Scholars (MATS)
Adrià Garriga-Alonso
Research Scientist, FAR AI
AI safety, interpretability
Keri Warr
Anthropic