Towards Verifiable AI with Lightweight Cryptographic Proofs of Inference

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of efficient, verifiable mechanisms for large-model inference in cloud environments, where existing cryptographic proof systems incur prohibitive prover overhead. The authors propose a lightweight verifiable-inference protocol that leverages the statistical separability of neural network execution traces together with random path sampling. The prover commits succinctly to the execution trace using Merkle tree–based vector commitments, and the protocol is further extended into a two-server adversarial verification framework. The approach reduces proof generation time from minutes to milliseconds and is demonstrated empirically on ResNet-18 and Llama-2-7B. It remains robust against natural adversarial strategies, making it suitable for auditing and large-scale deployment scenarios.
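The Merkle tree–based vector commitment mentioned above can be sketched as follows. This is a minimal Python illustration, not the paper's implementation; the function names (`merkle_commit`, `merkle_open`, `merkle_verify`) are invented here. The prover would commit to a serialized execution trace and later open only the entries the verifier samples.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_commit(leaves):
    """Build a Merkle tree over trace entries (byte strings).

    Returns all tree levels; the root (the commitment) is levels[-1][0].
    """
    level = [_h(b"leaf:" + x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_open(levels, index):
    """Authentication path for leaf `index`: sibling hashes, bottom-up."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])         # sibling of the current node
        index //= 2
    return path

def merkle_verify(root, leaf, index, path):
    """Recompute the root from `leaf` and its path; compare to the commitment."""
    node = _h(b"leaf:" + leaf)
    for sibling in path:
        node = _h(node + sibling) if index % 2 == 0 else _h(sibling + node)
        index //= 2
    return node == root
```

Opening a single entry costs one path of logarithmic length, which is why opening only a few sampled trace positions keeps proofs small.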

📝 Abstract
When large AI models are deployed as cloud-based services, clients have no guarantee that responses are correct or were produced by the intended model. Rerunning inference locally is infeasible for large models, and existing cryptographic proof systems -- while providing strong correctness guarantees -- introduce prohibitive prover overhead (e.g., hundreds of seconds per query for billion-parameter models). We present a verification framework and protocol that replaces full cryptographic proofs with a lightweight, sampling-based approach grounded in statistical properties of neural networks. We formalize the conditions under which trace separation between functionally dissimilar models can be leveraged to argue the security of verifiable inference protocols. The prover commits to the execution trace of inference via Merkle-tree-based vector commitments and opens only a small number of entries along randomly sampled paths from output to input. This yields a protocol that trades soundness for efficiency, a tradeoff well-suited to auditing, large-scale deployment settings where repeated queries amplify detection probability, and scenarios with rationally incentivized provers who face penalties upon detection. Our approach reduces proving times by several orders of magnitude compared to state-of-the-art cryptographic proof systems, going from the order of minutes to the order of milliseconds, with moderately larger proofs. Experiments on ResNet-18 classifiers and Llama-2-7B confirm that common architectures exhibit the statistical properties our protocol requires, and that natural adversarial strategies (gradient-descent reconstruction, inverse transforms, logit swapping) fail to produce traces that evade detection. We additionally present a protocol in the refereed delegation model, where two competing servers enable correct output identification in a logarithmic number of rounds.
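The refereed delegation model mentioned in the abstract rests on a classic bisection idea: when two competing servers return conflicting execution traces, a referee binary-searches for the first step at which the traces diverge and re-executes only that single step, which takes a logarithmic number of rounds. A minimal sketch (the function names and the offline, full-trace simplification are mine, not the paper's; in the interactive protocol each probe would be a single commitment opening):

```python
def find_divergence(trace_a, trace_b):
    """Binary search for the first index where two traces differ.

    Assumes trace_a[0] == trace_b[0] (shared input) and the final
    states differ (the servers disagree on the output).
    """
    lo, hi = 0, len(trace_a) - 1      # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    return hi

def referee(trace_a, trace_b, step):
    """Decide the honest server by re-executing only the disputed step.

    `step(state, i)` is the referee's (cheap) re-execution of step i
    from an agreed-upon prior state.
    """
    i = find_divergence(trace_a, trace_b)
    truth = step(trace_a[i - 1], i - 1)   # both servers agree on state i-1
    if truth == trace_a[i]:
        return "A"
    if truth == trace_b[i]:
        return "B"
    return "neither"
```

The referee's work is one step of inference plus O(log n) comparisons, regardless of how long the full computation is.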
Problem

Research questions and friction points this paper is trying to address.

Verifiable AI
Cryptographic Proofs
Inference Verification
Cloud-based AI Services
Model Authentication
Innovation

Methods, ideas, or system contributions that make the work stand out.

lightweight cryptographic proofs
verifiable inference
statistical trace separation
Merkle-tree commitments
refereed delegation
Pranay Anchuri — Offchain Labs
Matteo Campanelli — Offchain Labs
Paul Cesaretti — CUNY Graduate Center
Rosario Gennaro — Offchain Labs; CUNY Graduate Center; City College of New York
Tushar M. Jois — CUNY Graduate Center; City College of New York
Hasan S. Kayman — City College of New York
Tugce Ozdemir — CUNY Graduate Center