VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning

📅 2025-01-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the joint challenges of privacy preservation, process verifiability, and robustness against malicious attacks (e.g., model poisoning, inference attacks) in blockchain-based federated learning, this paper proposes the first trustless, on-chain verifiable federated learning framework, requiring no trusted third party. Methodologically, it tightly integrates zk-SNARKs with incrementally verifiable computation (IVC), enabling fine-grained, efficient verification across the entire workflow, from local model training to global aggregation. Privacy is strengthened via differential privacy, while contribution accountability is ensured through auditable smart contracts. Experimentally, the framework achieves local proof generation in under 81 seconds, aggregation proof generation in under 2 seconds, and on-chain verification in under 0.6 seconds. These results demonstrate substantial practicality gains without compromising security or decentralization.

📝 Abstract
Blockchain-based Federated Learning (BFL) is an emerging decentralized machine learning paradigm that enables model training without relying on a central server. Although some BFL frameworks are considered privacy-preserving, they are still vulnerable to various attacks, including inference and model poisoning. Additionally, most of these solutions employ strong trust assumptions among all participating entities or introduce incentive mechanisms to encourage collaboration, making them susceptible to multiple security flaws. This work presents VerifBFL, a trustless, privacy-preserving, and verifiable federated learning framework that integrates blockchain technology and cryptographic protocols. By employing zero-knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) and incrementally verifiable computation (IVC), VerifBFL ensures the verifiability of both local training and aggregation processes. The proofs of training and aggregation are verified on-chain, guaranteeing the integrity and auditability of each participant's contributions. To protect training data from inference attacks, VerifBFL leverages differential privacy. Finally, to demonstrate the efficiency of the proposed protocols, we built a proof of concept using emerging tools. The results show that generating proofs for local training and aggregation in VerifBFL takes less than 81s and 2s, respectively, while verifying them on-chain takes less than 0.6s.
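The abstract mentions that VerifBFL applies differential privacy to shield training data from inference attacks. As a minimal sketch of the standard Gaussian mechanism commonly used for this purpose (clip each local update to a bounded L2 norm, then add calibrated Gaussian noise), the snippet below illustrates the idea; the function name and parameters are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative Gaussian mechanism for a local model update.

    1. Clip the update to L2 norm at most `clip_norm` (bounds sensitivity).
    2. Add Gaussian noise with std = noise_multiplier * clip_norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=np.float64)
    norm = np.linalg.norm(update)
    # Scale down only when the norm exceeds the clipping threshold.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise
```

The clipping step is what makes the noise scale meaningful: it caps each participant's influence on the aggregate, so the added noise yields a quantifiable privacy guarantee.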
Problem

Research questions and friction points this paper is trying to address.

Blockchain-based Federated Learning
Privacy Protection
Model Validation and Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

zk-SNARKs
Differential Privacy
Verifiable Federated Learning
Ahmed Ayoub Bellachia
Research Engineer
AI Security · Federated Learning · Blockchain · Applied Cryptography
Mouhamed Amine Bouchiha
Postdoc, Institut Mines-Télécom, SudParis
Trust · Privacy · Blockchains · Federated Learning · LLMs
Yacine Ghamri-Doudane
L3I, University of La Rochelle, La Rochelle, France
Mourad Rabah
L3I, University of La Rochelle, La Rochelle, France