ERIS: Enhancing Privacy and Communication Efficiency in Serverless Federated Learning

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of simultaneously achieving communication efficiency, model accuracy, and privacy preservation in billion-parameter federated learning. To this end, we propose ERIS, a serverless framework that distributes both communication and computation loads across multiple client-side aggregators through model sharding and distributed shifted gradient compression. ERIS achieves convergence rates comparable to FedAvg without relying on cryptographic techniques or noise injection, while effectively mitigating mutual information leakage by increasing the number of aggregators. Experimental results demonstrate that ERIS attains FedAvg-level accuracy on both vision and large language model tasks, substantially reduces communication overhead, and enhances robustness against membership inference and reconstruction attacks.

📝 Abstract
Scaling federated learning (FL) to billion-parameter models introduces critical trade-offs between communication efficiency, model accuracy, and privacy guarantees. Existing solutions often tackle these challenges in isolation, sacrificing accuracy or relying on costly cryptographic tools. We propose ERIS, a serverless FL framework that balances privacy and accuracy while eliminating the server bottleneck and distributing the communication load. ERIS combines a model partitioning strategy, distributing aggregation across multiple client-side aggregators, with a distributed shifted gradient compression mechanism. We theoretically prove that ERIS (i) converges at the same rate as FedAvg under standard assumptions, and (ii) bounds mutual information leakage inversely with the number of aggregators, enabling strong privacy guarantees with no accuracy degradation. Experiments across image and text tasks, including large language models, confirm that ERIS achieves FedAvg-level accuracy while substantially reducing communication cost and improving robustness to membership inference and reconstruction attacks, without relying on heavy cryptography or noise injection.
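The sharded aggregation the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the top-k compressor, the zero initial shift, and all function names are assumptions standing in for ERIS's actual distributed shifted gradient compression; the key structural point is that each aggregator only ever sees one shard of each client's shifted, compressed update.

```python
# Illustrative sketch of ERIS-style sharded aggregation with shifted
# compression. Top-k is a stand-in compressor; names are hypothetical.
import numpy as np

def shard(vec, m):
    """Split a flat parameter vector into m contiguous shards."""
    return np.array_split(vec, m)

def topk_compress(vec, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

def fl_round(client_updates, shifts, m, k):
    """One round: aggregator j receives only the j-th shard of each
    client's update, shifted and compressed, then averages it."""
    aggregated = []
    for j in range(m):
        msgs = [topk_compress(shard(u, m)[j] - shifts[j], k)
                for u in client_updates]
        aggregated.append(shifts[j] + np.mean(msgs, axis=0))
    return np.concatenate(aggregated)

rng = np.random.default_rng(0)
updates = [rng.normal(size=12) for _ in range(4)]  # 4 clients
m = 3                                              # 3 aggregators
shifts = shard(np.zeros(12), m)                    # zero shift at round 0
global_update = fl_round(updates, shifts, m, k=2)
print(global_update.shape)  # (12,)
```

Because no single aggregator observes a full client update, increasing `m` shrinks what any one party can infer, which is the intuition behind the paper's inverse mutual-information bound.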
Problem

Research questions and friction points this paper addresses.

federated learning
privacy
communication efficiency
serverless
large-scale models
Innovation

Methods, ideas, or system contributions that make the work stand out.

serverless federated learning
model partitioning
distributed gradient compression
privacy-accuracy trade-off
mutual information leakage