Securing Private Federated Learning in a Malicious Setting: A Scalable TEE-Based Approach with Client Auditing

📅 2025-09-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing DP-FTRL-based cross-device private federated learning relies on a semi-honest server, and its stateful protocol cannot withstand a malicious server, a weakness that becomes acute in practical settings where clients drop out or are compromised. Method: This paper proposes the first maliciously secure DP-FTRL framework for private federated learning, featuring: (1) a minimal trusted computing base (TCB) built on a short-lived server-side trusted execution environment (TEE); (2) an interactive DP-FTRL protocol that yields verifiable proofs of server-side operations; and (3) a lightweight client-side auditing mechanism. Contribution/Results: The framework is formally proven to satisfy differential privacy under arbitrary malicious adversaries. Empirical evaluation demonstrates only constant-factor overhead in communication and computation, while achieving high scalability, fork resilience, and system liveness.
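For context on the underlying mechanism (general DP-FTRL background, not this paper's protocol): DP-FTRL privatizes the running sum of per-round updates with the binary-tree aggregation scheme, so each prefix sum carries only O(log T) independent noise terms rather than one per round. A minimal sketch, assuming Gaussian noise and a standard dyadic decomposition; the function name and structure are ours, not from the paper:

```python
import numpy as np

def tree_noisy_prefix_sums(grads, sigma, seed=0):
    """Noisy prefix sums via the binary-tree mechanism (DP-FTRL core).

    The prefix [1..t] is covered by one dyadic interval per set bit
    of t, so each released sum includes at most bit_count(t) cached
    Gaussian noise vectors instead of t of them.
    """
    rng = np.random.default_rng(seed)
    d = grads[0].shape[0]
    node_noise = {}  # (level, start) -> cached noise for that tree node

    out = []
    prefix = np.zeros(d)
    for t, g in enumerate(grads, start=1):
        prefix = prefix + g
        noisy = prefix.copy()
        start = 0  # rounds covered so far by chosen dyadic intervals
        for level in reversed(range(t.bit_length())):
            if (t >> level) & 1:
                key = (level, start)
                if key not in node_noise:
                    node_noise[key] = rng.normal(0.0, sigma, d)
                noisy = noisy + node_noise[key]
                start += 1 << level
        out.append(noisy)
    return out
```

With `sigma=0` this reduces to exact prefix sums, which is a convenient sanity check; the privacy analysis then follows from the Gaussian mechanism applied once per tree node.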

📝 Abstract
In cross-device private federated learning, differentially private follow-the-regularized-leader (DP-FTRL) has emerged as a promising privacy-preserving method. However, existing approaches assume a semi-honest server and have not addressed the challenge of securely removing this assumption. The difficulty stems from DP-FTRL's statefulness, which becomes particularly problematic in practical settings where clients can drop out or be corrupted. While trusted execution environments (TEEs) might seem like an obvious solution, a straightforward implementation can introduce forking attacks or availability issues due to state management. To address this problem, our paper introduces a novel server extension that acts as a trusted computing base (TCB) to realize maliciously secure DP-FTRL. The TCB is implemented as an ephemeral TEE module on the server side that produces verifiable proofs of server actions. Selected clients participate in auditing these proofs with small additional communication and computational demands. This extension reduces the size of the TCB while maintaining the system's scalability and liveness. We provide formal proofs based on interactive differential privacy, demonstrating privacy guarantees in malicious settings. Finally, we experimentally show that our framework adds only a small constant overhead for clients in several realistic settings.
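The abstract does not specify the audit protocol, but the general pattern it gestures at (verifiable proofs of server actions, checked by selected clients, with fork resilience) is commonly realized by hash-chaining a log of server operations and having auditors compare chain heads. A hypothetical sketch under those assumptions; all names, entry fields, and structure are ours:

```python
import hashlib
import hmac
import json

def chain_digest(prev_digest: bytes, entry: dict) -> bytes:
    """Fold one log entry into a hash chain: h_i = H(h_{i-1} || entry_i)."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(prev_digest + payload).digest()

class AuditingClient:
    """Replays a signed log of server actions and checks the chain head.

    If independent auditors all accept the same head, they saw the same
    totally ordered action sequence, which rules out a forked history.
    """
    GENESIS = b"\x00" * 32

    def verify(self, log_entries, claimed_head: bytes) -> bool:
        digest = self.GENESIS
        for entry in log_entries:
            digest = chain_digest(digest, entry)
        # Constant-time comparison of the recomputed and claimed heads.
        return hmac.compare_digest(digest, claimed_head)
```

The audit cost is one hash per logged action plus a constant-size head exchange, which is consistent with the "small additional communication and computational demands" the abstract claims, though the paper's actual proofs and checks may differ.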
Problem

Research questions and friction points this paper is trying to address.

Securing private federated learning against malicious servers
Addressing state management challenges with TEE-based solutions
Providing verifiable privacy guarantees in malicious settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ephemeral TEE module for verifiable proofs
Client auditing with minimal overhead
Scalable maliciously secure DP-FTRL implementation