🤖 AI Summary
In federated learning (FL), sharing performance metrics during evaluation can leak sensitive information. To address this, the work introduces zero-knowledge proofs (ZKPs) into the FL evaluation phase for the first time, proposing a verifiable, privacy-preserving evaluation protocol that requires no trusted third party, no external APIs, and no disclosure of raw data. The protocol integrates a threshold-based verification circuit with an FL simulation module, enabling secure validation of loss values and classification accuracy for CNN and MLP models on the MNIST and HAR datasets without revealing the original data or intermediate loss values. Experimental results show that the scheme achieves strict privacy guarantees while incurring low communication overhead and controllable computational cost, empirically validating its feasibility, practicality, and scalability.
📝 Abstract
Federated Learning (FL) enables collaborative model training on decentralized data without exposing raw data. However, the evaluation phase in FL may leak sensitive information through shared performance metrics. In this paper, we propose a novel protocol that incorporates Zero-Knowledge Proofs (ZKPs) to enable privacy-preserving and verifiable evaluation for FL. Instead of revealing raw loss values, clients generate a succinct proof asserting that their local loss is below a predefined threshold. Our approach is implemented without reliance on external APIs, using self-contained modules for federated learning simulation, ZKP circuit design, and experimental evaluation on both the MNIST and Human Activity Recognition (HAR) datasets. We focus on a threshold-based proof for a simple Convolutional Neural Network (CNN) model on MNIST and a Multi-Layer Perceptron (MLP) model on HAR, and evaluate the approach in terms of computational overhead, communication cost, and verifiability.
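To make the threshold-based check concrete, below is a minimal Python sketch of the predicate that such a verification circuit would enforce. It is an illustrative assumption rather than the paper's implementation: the names (`quantize_loss`, `ThresholdStatement`, `circuit_check`), the fixed-point scale, and the example loss values are hypothetical, and a real deployment would compile the comparison into an arithmetic circuit and exchange succinct proofs instead of plaintext values.

```python
# Conceptual sketch of the threshold predicate behind the ZKP evaluation step.
# All names and values here are illustrative, not the authors' implementation.

from dataclasses import dataclass

SCALE = 10_000  # fixed-point scale: ZKP circuits work over integers/field elements


def quantize_loss(loss: float) -> int:
    """Encode a floating-point loss as a fixed-point integer for the circuit."""
    return int(round(loss * SCALE))


@dataclass
class ThresholdStatement:
    """Public statement: 'my private local loss is below this threshold'."""
    threshold_q: int  # quantized threshold, known to the verifier


def circuit_check(private_loss_q: int, statement: ThresholdStatement) -> bool:
    """The single constraint the verification circuit enforces.

    In a real ZKP system this comparison is expressed as circuit constraints
    and proven without revealing private_loss_q to the verifier.
    """
    return private_loss_q <= statement.threshold_q


# Client side: evaluate the local model, quantize the loss, and (conceptually)
# prove that the constraint holds.
local_loss = 0.4213                                   # e.g., CNN loss on a local MNIST shard
statement = ThresholdStatement(threshold_q=quantize_loss(0.5))
witness = quantize_loss(local_loss)                   # private input, never shared in plaintext

# Server side: in the actual protocol the server verifies a succinct proof;
# here we only show the predicate that the proof attests to.
assert circuit_check(witness, statement)
print("Client passes evaluation without revealing its loss value.")
```

One design point worth noting: ZKP circuits operate over finite-field (integer) arithmetic, so a floating-point loss must be quantized before it can serve as a private witness, which is why the sketch scales the loss to a fixed-point integer.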