ZKLoRA: Efficient Zero-Knowledge Proofs for LoRA Verification

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In distributed, untrusted training settings, three challenges remain fundamental: verifying that externally contributed LoRA weights are compatible with an open-source base model, protecting the contributor's intellectual property, and proving inference validity without leaking parameters. To address these, this paper proposes the first LoRA-based collaborative inference framework to support module-level zero-knowledge verification. Combining succinct zero-knowledge proofs with a novel Multi-Party Inference procedure, the authors design a lightweight, customized arithmetic circuit that enables deterministic, privacy-preserving verification of compatibility, validity, and provenance, without revealing private LoRA parameters. Experimental evaluation on mainstream large language models demonstrates a verification latency of only 1–2 seconds per LoRA module, deterministic correctness guarantees, and robust support for cross-institutional, contract-governed training and secure collaboration.
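The "module-level" verification above targets the standard LoRA forward pass, in which a frozen base weight is augmented by a low-rank update. A minimal sketch of that computation (with hypothetical toy dimensions, not the paper's actual circuit) helps show why per-module proofs can stay small:

```python
import numpy as np

# Hypothetical toy dimensions for illustration; real LLM layers are far larger.
d_out, d_in, r = 8, 8, 2  # LoRA rank r is much smaller than d_in, d_out

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base-model weight (public)
A = rng.standard_normal((r, d_in))      # contributor's private LoRA factor
B = rng.standard_normal((d_out, r))     # contributor's private LoRA factor
x = rng.standard_normal(d_in)           # an input activation

# LoRA-adapted forward pass: the per-module quantity a proof must attest to.
y = W @ x + B @ (A @ x)

# The low-rank update never needs to be materialized as a full
# d_out x d_in matrix, which keeps the per-module computation cheap.
assert np.allclose(y, (W + B @ A) @ x)
```

Because each module's update factors through rank-`r` matrices, the statement to be proven per module stays compact, consistent with the low per-module verification latency reported above.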

📝 Abstract
Low-Rank Adaptation (LoRA) is a widely adopted method for customizing large-scale language models. In distributed, untrusted training environments, an open source base model user may want to use LoRA weights created by an external contributor, leading to two requirements: (1) the base model user must confirm that the LoRA weights are effective when paired with the intended base model, and (2) the LoRA contributor must keep their proprietary weights private until compensation is assured. We present ZKLoRA, a zero-knowledge verification protocol that relies on succinct proofs and our novel Multi-Party Inference procedure to verify LoRA-base model compatibility without exposing LoRA weights. ZKLoRA produces deterministic correctness guarantees and validates each LoRA module in only 1-2 seconds on state-of-the-art large language models. This low-latency approach enables nearly real-time verification and promotes secure collaboration among geographically decentralized teams and contract-based training pipelines. The protocol ensures that the delivered LoRA module works as claimed, safeguarding the contributor's intellectual property while providing the base model user with verification of compatibility and lineage.
Problem

Research questions and friction points this paper is trying to address.

LoRA Compatibility
Intellectual Property Protection
Effectiveness Verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

ZKLoRA
Zero-Knowledge Proofs
LoRA Compatibility