FLAT: Latent-Driven Arbitrary-Target Backdoor Attacks in Federated Learning

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing backdoor attacks in federated learning (FL) rely on fixed, single-target triggers, making them inflexible and easy to detect. To address this, we propose FLAT, the first framework to leverage a latent-code-driven conditional autoencoder for retraining-free, arbitrary-target backdoor attacks. FLAT generates visually adaptive, target-specific, and diverse stealthy triggers and dynamically injects multi-target backdoors into FL via gradient manipulation. By jointly optimizing attack success rate, stealthiness, and trigger diversity, FLAT maintains high attack efficacy under state-of-the-art defenses, including RFA, Krum, and Norm Clipping, demonstrating robustness and evasion capability against detection. This work establishes a novel paradigm for FL backdoor attack research.

📝 Abstract
Federated learning (FL) is vulnerable to backdoor attacks, yet most existing methods are limited by fixed-pattern or single-target triggers, making them inflexible and easier to detect. We propose FLAT (FL Arbitrary-Target Attack), a novel backdoor attack that leverages a latent-driven conditional autoencoder to generate diverse, target-specific triggers as needed. By introducing a latent code, FLAT enables the creation of visually adaptive and highly variable triggers, allowing attackers to select arbitrary targets without retraining and to evade conventional detection mechanisms. Our approach unifies attack success, stealth, and diversity within a single framework, introducing a new level of flexibility and sophistication to backdoor attacks in FL. Extensive experiments show that FLAT achieves high attack success and remains robust against advanced FL defenses. These results highlight the urgent need for new defense strategies to address latent-driven, multi-target backdoor threats in federated settings.
Problem

Research questions and friction points this paper is trying to address.

FL is vulnerable to fixed-pattern backdoor attacks
Existing methods lack flexibility and are easily detected
Need for diverse, target-specific triggers in FL attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent-driven conditional autoencoder for diverse triggers
Visually adaptive triggers evade detection mechanisms
Unifies attack success, stealth, and diversity
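The latent-driven trigger generation described above can be illustrated with a minimal sketch. This is not the paper's implementation: the decoder weights here are random stand-ins for a trained conditional autoencoder, and the names (`generate_trigger`, `poison`, `eps`) are hypothetical. The sketch shows only the core idea: a fresh latent code plus a one-hot target label maps to a bounded, target-specific trigger, so arbitrary targets need no retraining and repeated poisonings of the same image differ.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES, LATENT_DIM = 10, 8
PATCH = 32 * 32 * 3  # flattened trigger size for a 32x32 RGB image

# Random weights standing in for a trained conditional-autoencoder decoder
W1 = rng.normal(0.0, 0.1, (LATENT_DIM + NUM_CLASSES, 64))
W2 = rng.normal(0.0, 0.1, (64, PATCH))

def generate_trigger(z, target):
    """Map a latent code plus one-hot target label to a trigger in (-1, 1)."""
    y = np.zeros(NUM_CLASSES)
    y[target] = 1.0
    h = np.tanh(np.concatenate([z, y]) @ W1)  # conditioning on the target class
    return np.tanh(h @ W2)

def poison(image, target, eps=8 / 255):
    """Additively blend a target-specific trigger, keeping pixels in [0, 1]."""
    z = rng.normal(size=LATENT_DIM)  # fresh latent -> diverse triggers per call
    trigger = generate_trigger(z, target).reshape(image.shape)
    return np.clip(image + eps * trigger, 0.0, 1.0)

clean = rng.uniform(size=(32, 32, 3))
poisoned = poison(clean, target=3)
```

Because the trigger is bounded by `eps` and resampled from a new latent code on every call, the perturbation stays visually small while varying across poisoned samples, which is the stealth-plus-diversity trade-off the framework optimizes jointly.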
Tuan Nguyen
VinUni-Illinois Smart Health Center, VinUniversity, Hanoi, Vietnam
Khoa D Doan
VinUniversity
Generative Modeling · Information Retrieval · Computational Advertising · Trustworthy AI
Kok-Seng Wong
VinUni-Illinois Smart Health Center, VinUniversity, Hanoi, Vietnam