Zero-Knowledge Proof Based Verifiable Inference of Models

📅 2025-11-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
A fundamental tension exists between protecting AI model intellectual property and enabling verifiable inference. Existing zero-knowledge (ZK) verification approaches for neural networks support only a limited set of operators and fail to cover full-layer semantics (including matrix multiplication, LayerNorm, Softmax, and SiLU), hindering end-to-end verifiable deep learning. Method: We propose the first ZK verification framework supporting complete neural network layers. It builds on a trusted-setup-free, recursively composable zkSNARK architecture and employs the Fiat–Shamir heuristic to yield succinct non-interactive proofs. The framework unifies circuit modeling for both linear and nonlinear layers and enables efficient proof generation. Contribution/Results: We instantiate the framework as ZK-DeepSeek, a fully verifiable variant of DeepSeek, demonstrating practical proof efficiency and flexibility under realistic workloads. This work establishes the first end-to-end, parameter-hiding verification protocol for deep learning inference, advancing both verifiable AI and model IP protection toward practical deployment.
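The Fiat–Shamir step the summary mentions replaces the verifier's random challenge with a hash of the proof transcript, which is what makes the proof non-interactive. A minimal sketch of that idea (illustrative only; the paper's actual hash function, transcript encoding, and proof field are not specified here, and `modulus` is a stand-in for the field size):

```python
import hashlib

def fiat_shamir_challenge(transcript: bytes, modulus: int) -> int:
    """Derive a verifier challenge deterministically from the transcript.

    Sketch of the Fiat-Shamir heuristic: hashing the prover's messages
    so far stands in for the verifier's random coin, removing the need
    for interaction. Not the paper's concrete protocol.
    """
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % modulus

# The prover derives the challenge itself from its own commitment,
# and the verifier recomputes the same hash to check consistency.
commitment = b"commitment-to-layer-outputs"  # hypothetical transcript
challenge = fiat_shamir_challenge(commitment, modulus=2**61 - 1)
```

Because the challenge is a deterministic function of the commitment, a prover cannot choose its messages after seeing the challenge, which is what preserves soundness without a live verifier.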

📝 Abstract
Recent advances in artificial intelligence (AI), particularly deep learning, have led to widespread adoption across various applications. Yet, a fundamental challenge persists: how can we verify the correctness of AI model inference when model owners cannot (or will not) reveal their parameters? These parameters represent enormous training costs and valuable intellectual property, making transparent verification difficult. In this paper, we introduce a zero-knowledge framework capable of verifying deep learning inference without exposing model internal parameters. Built on recursively composed zero-knowledge proofs and requiring no trusted setup, our framework supports both linear and nonlinear neural network layers, including matrix multiplication, normalization, softmax, and SiLU. Leveraging the Fiat-Shamir heuristic, we obtain a succinct non-interactive argument of knowledge (zkSNARK) with constant-size proofs. To demonstrate the practicality of our approach, we translate the DeepSeek model into a fully SNARK-verifiable version named ZK-DeepSeek and show experimentally that our framework delivers both efficiency and flexibility in real-world AI verification workloads.
Problem

Research questions and friction points this paper is trying to address.

Verifying AI model inference without exposing proprietary parameters
Enabling trust in deep learning outputs through zero-knowledge proofs
Creating efficient verifiable neural networks including nonlinear operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-knowledge proofs verify AI inference correctness
Framework supports linear and nonlinear neural network layers
SNARK-verifiable DeepSeek model demonstrates practical efficiency