Verifiability and Privacy in Federated Learning through Context-Hiding Multi-Key Homomorphic Authenticators

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, malicious aggregators may tamper with model updates, compromise privacy, and evade client-side verification. To address this, we propose a verifiable secure aggregation protocol. Our method introduces a context-hiding multi-key homomorphic authenticator, enabling clients to efficiently verify the integrity and correctness of aggregated results without revealing their local model updates. By integrating secure aggregation with linearly homomorphic authentication, the protocol ensures verifiability of aggregation operations directly over encrypted data, achieving low computational overhead and high scalability. Experimental evaluations demonstrate that the approach supports large-scale neural networks with millions of parameters, exhibiting low verification latency and significantly reduced communication and computation costs compared to state-of-the-art schemes.

📝 Abstract
Federated Learning has rapidly expanded from its original inception to a large body of research, several frameworks, and a variety of commercial offerings. Its security and robustness are therefore of significant importance. Many algorithms provide robustness against malicious clients; however, the aggregator itself may behave maliciously, for example by biasing the model or tampering with the weights to weaken the model's privacy. In this work, we introduce a verifiable federated learning protocol that enables clients to verify the correctness of the aggregator's computation without compromising the confidentiality of their updates. Our protocol combines a standard secure aggregation technique, which protects individual model updates, with a linearly homomorphic authenticator scheme that enables efficient, privacy-preserving verification of the aggregated result. Our construction ensures that clients can detect manipulation by the aggregator while maintaining low computational overhead. We demonstrate that our approach scales to large models, enabling verification over neural networks with millions of parameters.
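The "standard secure aggregation technique" the abstract refers to can be illustrated with a pairwise-masking sketch in the style of Bonawitz et al.: each pair of clients derives a shared mask that one adds and the other subtracts, so the aggregator only ever sees masked updates, yet the masks cancel in the sum. All names, the modulus, and the seeding scheme below are illustrative assumptions, not details from this paper.

```python
# Toy pairwise-mask secure aggregation sketch (assumed construction, not the
# paper's exact protocol). Each client pair (i, j) shares a seed; client i
# adds the derived mask, client j subtracts it, so masks cancel in the sum.
import random

MOD = 2**32  # illustrative working modulus for masked integer arithmetic

def pairwise_masks(client_ids, dim, seed=0):
    """Derive a per-client mask vector from shared pairwise seeds."""
    masks = {i: [0] * dim for i in client_ids}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                rng = random.Random((seed, i, j))  # stand-in for a shared PRG key
                m = [rng.randrange(MOD) for _ in range(dim)]
                for k in range(dim):
                    masks[i][k] = (masks[i][k] + m[k]) % MOD  # i adds the mask
                    masks[j][k] = (masks[j][k] - m[k]) % MOD  # j subtracts it
    return masks

def mask_update(update, mask):
    return [(u + m) % MOD for u, m in zip(update, mask)]

clients = [1, 2, 3]
updates = {1: [5, 7], 2: [1, 2], 3: [4, 9]}
masks = pairwise_masks(clients, dim=2)
masked = [mask_update(updates[i], masks[i]) for i in clients]
# The aggregator sums masked updates; the pairwise masks cancel.
total = [sum(col) % MOD for col in zip(*masked)]
print(total)  # [10, 18], the sum of the plaintext updates
```

Each masked vector individually looks uniformly random to the aggregator, but the modular sum recovers exactly the aggregate of the plaintext updates.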
Problem

Research questions and friction points this paper is trying to address.

Ensuring verifiability of aggregator computation in federated learning
Protecting privacy of client updates during verification process
Detecting malicious aggregator manipulation without high overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-key homomorphic authenticators for verifiable aggregation
Context-hiding scheme preserves client update confidentiality
Efficient verification scales to million-parameter neural networks
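The verification idea behind the bullets above can be sketched with a toy linearly homomorphic authenticator: a tag of the form tag(m) = a*m + b_i (mod p), where a is a secret scaling key and each b_i is a one-time key. Because tags add homomorphically, a verifier holding the keys can check a claimed aggregate sum without seeing individual values. This is a simplified single-key sketch for intuition only; the paper's scheme is multi-key and context-hiding, which this toy does not capture.

```python
# Toy linearly homomorphic MAC (illustrative single-key sketch, NOT the
# paper's multi-key context-hiding authenticator).
# tag(m) = (a*m + b_i) mod p; sums of tags authenticate sums of messages.
import random

P = (1 << 61) - 1  # a Mersenne prime serving as the working field

def keygen(n):
    a = random.randrange(1, P)                    # secret scaling key
    b = [random.randrange(P) for _ in range(n)]   # one-time keys, one per client
    return a, b

def tag(a, b_i, m):
    return (a * m + b_i) % P

def verify(a, b, claimed_sum, combined_tag):
    # Linearity: sum(tags) = a * sum(messages) + sum(one-time keys) mod p
    return combined_tag == (a * claimed_sum + sum(b)) % P

updates = [3, 8, 12]                 # toy scalar "model updates"
a, b = keygen(len(updates))
tags = [tag(a, b[i], m) for i, m in enumerate(updates)]
agg_tag = sum(tags) % P              # aggregator combines tags homomorphically

print(verify(a, b, sum(updates), agg_tag))      # True: honest aggregation
print(verify(a, b, sum(updates) + 1, agg_tag))  # False: tampered result
```

The same check extends coordinate-wise to weight vectors, which is why verification cost can stay linear in the model size and scale to million-parameter networks.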