🤖 AI Summary
Federated learning (FL) is vulnerable to Byzantine attacks, as the central server cannot verify the integrity of clients' local training processes; moreover, existing data-driven defenses struggle to distinguish malicious model updates from benign discrepancies caused by non-IID data distributions, resulting in high false-positive rates. To address this, we propose Sentinel, the first FL security framework to tightly integrate remote attestation with trusted execution environments (TEEs). Sentinel achieves verifiable training via fine-grained code instrumentation and control-flow monitoring, while enforcing runtime integrity checks on critical variables within the TEE and generating cryptographically signed remote attestation reports. These mechanisms jointly ensure the authenticity and integrity of model updates. Evaluated on resource-constrained IoT devices, Sentinel incurs minimal overhead while substantially reducing false positives and significantly enhancing the reliability and security of global model aggregation.
📝 Abstract
Federated Learning (FL) has gained significant attention for its privacy-preserving capabilities, enabling distributed devices to collaboratively train a global model without sharing raw data. However, its distributed nature forces the central server to blindly trust the local training process and aggregate model updates of uncertain integrity, making it susceptible to Byzantine attacks from malicious participants, especially in mission-critical scenarios. Detecting such attacks is challenging due to the diverse knowledge across clients, where variations in model updates may stem from benign factors, such as non-IID data, rather than adversarial behavior. Existing data-driven defenses struggle to distinguish malicious updates from these natural variations, leading to high false-positive rates and poor filtering performance.
To address this challenge, we propose Sentinel, a remote attestation (RA)-based scheme for FL systems that restores client-side transparency and mitigates Byzantine attacks from a system security perspective. Our system employs code instrumentation to track control flow and monitor critical variables in the local training process. Additionally, we utilize a trusted training recorder within a Trusted Execution Environment (TEE) to generate an attestation report, which is cryptographically signed and securely transmitted to the server. Upon verification, the server ensures that legitimate client training processes remain free from program-behavior violations and data manipulation, allowing only trusted model updates to be aggregated into the global model. Experimental results on IoT devices demonstrate that Sentinel guarantees the integrity of the local training process with low runtime and memory overhead.
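The abstract's pipeline (instrumented training records control-flow events and critical variables into a measurement, a trusted recorder signs the report, and the server verifies before aggregating) can be illustrated with a minimal sketch. All names here are hypothetical, and an HMAC over a shared key stands in for the TEE-backed attestation signature; a real deployment would use an asymmetric key sealed inside the TEE and a hardware-rooted quote.

```python
import hashlib
import hmac
import json

# Stand-in for a TEE-sealed attestation key (hypothetical; in Sentinel the
# key would never be visible to normal-world code).
TEE_KEY = b"tee-sealed-attestation-key"

def extend(measurement: bytes, event: str) -> bytes:
    """Hash-chain one training event into the running measurement,
    analogous to extending a TPM/TEE measurement register."""
    return hashlib.sha256(measurement + event.encode()).digest()

def local_training_with_recorder(num_epochs: int = 2) -> dict:
    """Instrumented training loop: every control-flow event and every
    critical-variable snapshot extends the measurement chain."""
    measurement = b"\x00" * 32
    weights = 0.0
    for epoch in range(num_epochs):
        measurement = extend(measurement, f"enter_epoch:{epoch}")
        weights += 0.1  # stand-in for a real gradient step
        # Snapshot a critical variable (e.g. the local update norm).
        measurement = extend(measurement, f"weights_snapshot:{weights:.4f}")
        measurement = extend(measurement, f"exit_epoch:{epoch}")
    report = {"measurement": measurement.hex(), "update": weights}
    body = json.dumps(report, sort_keys=True).encode()
    report["sig"] = hmac.new(TEE_KEY, body, hashlib.sha256).hexdigest()
    return report

def server_verify(report: dict, golden_measurement: str) -> bool:
    """Aggregate the update only if the signature is valid and the recorded
    control flow matches the expected (golden) measurement."""
    body = json.dumps({k: v for k, v in report.items() if k != "sig"},
                      sort_keys=True).encode()
    sig = hmac.new(TEE_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, report["sig"])
            and report["measurement"] == golden_measurement)
```

In this toy model, a report whose update was tampered with after signing fails the signature check, and a training run whose control flow deviated from the expected path produces a different measurement; either way the server drops the update instead of aggregating it.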