ExclaveFL: Providing Transparency to Federated Learning using Exclaves

📅 2024-12-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), malicious clients can deviate arbitrarily from the training protocol, and such deviations are difficult to detect; existing TEE-based defenses are vulnerable to side-channel attacks that can leak secret keys and enable identity spoofing. Method: the paper proposes an end-to-end verifiable execution-assurance mechanism tailored to FL. It introduces *exclaves*, a novel hardware security abstraction that guarantees integrity (without requiring confidentiality) and enables fine-grained, runtime, hardware-enforced attestation. On top of exclaves, the authors design a lightweight TEE, an attestation-report generation module, and a framework that verifies a certified dataflow graph for protocol compliance. Contribution/Results: the implementation incurs less than 9% overhead and detects diverse FL protocol-deviation attacks, including model poisoning, gradient manipulation, and client impersonation, while significantly improving training trustworthiness and auditability.

📝 Abstract
In federated learning (FL), data providers jointly train a model without disclosing their training data. Despite its privacy benefits, a malicious data provider can simply deviate from the correct training protocol without being detected, thus attacking the trained model. While current solutions have explored the use of trusted execution environments (TEEs) to combat such attacks, there is a mismatch with the security needs of FL: TEEs offer confidentiality guarantees, which are unnecessary for FL and make them vulnerable to side-channel attacks, and they focus on coarse-grained attestation, which does not capture the execution of FL training. We describe ExclaveFL, an FL platform that achieves end-to-end transparency and integrity for detecting attacks. ExclaveFL achieves this by employing a new hardware security abstraction, exclaves, which focuses on integrity-only guarantees. ExclaveFL uses exclaves to protect the execution of FL tasks, while generating signed statements that contain fine-grained, hardware-based attestation reports of task execution at runtime. ExclaveFL then enables auditing using these statements: it constructs an attested dataflow graph and checks that the FL training job satisfies claims, such as the absence of attacks. Our experiments show that ExclaveFL introduces less than 9% overhead while detecting a wide range of attacks.
Problem

Research questions and friction points this paper is trying to address.

Detecting deviations in federated learning protocols
Preventing side-channel attacks on TEEs in FL
Ensuring integrity and transparency in FL training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses TEEs as exclaves without secrets
Attests data transformations at runtime
Forms attested dataflow graph for integrity
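The audit idea above, tasks emit signed statements about their inputs and outputs, and a verifier rebuilds the dataflow graph and rejects any edge without an attested producer, can be sketched as follows. This is a minimal illustration, not the ExclaveFL implementation: real exclaves emit hardware-signed attestation reports, which are modeled here with a hypothetical HMAC key standing in for the hardware signing identity, and `sign_statement` / `verify_dataflow` are invented names for this sketch.

```python
import hashlib
import hmac

# Hypothetical stand-in for a hardware attestation key; in the paper's
# design, signatures come from hardware-enforced attestation, not HMAC.
HW_KEY = b"demo-attestation-key"


def sign_statement(task_id, input_hashes, output_hash):
    """Produce a signed statement recording one task execution:
    which attested inputs it consumed and which output it produced."""
    payload = f"{task_id}|{','.join(sorted(input_hashes))}|{output_hash}".encode()
    sig = hmac.new(HW_KEY, payload, hashlib.sha256).hexdigest()
    return {"task": task_id, "inputs": list(input_hashes),
            "output": output_hash, "sig": sig}


def verify_dataflow(statements):
    """Rebuild the attested dataflow graph from statements (assumed to be
    in execution order) and check two claims: every signature is valid,
    and every consumed input is the attested output of an earlier task,
    i.e. no unattested update was injected into training."""
    attested_outputs = set()
    for st in statements:
        payload = f"{st['task']}|{','.join(sorted(st['inputs']))}|{st['output']}".encode()
        expected = hmac.new(HW_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, st["sig"]):
            return False  # statement tampered with or not properly signed
        if any(h not in attested_outputs for h in st["inputs"]):
            return False  # input with no attested producer (injected data)
        attested_outputs.add(st["output"])
    return True
```

For example, two client training tasks each attest a model update, an aggregation task attests that it consumed exactly those two updates, and the auditor accepts the graph; if the aggregator's statement is altered after signing, or a statement references an update no attested task produced, verification fails.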