FedTrident: Resilient Road Condition Classification Against Poisoning Attacks in Federated Learning

📅 2026-03-19
🤖 AI Summary
This work addresses the vulnerability of federated learning in vehicle-infrastructure cooperative road condition classification to targeted label-flipping attacks, which jeopardize model performance and traffic safety. To counter this threat, the authors propose FedTrident, a novel framework that introduces, for the first time, a neuron-level anomaly detection mechanism specifically designed for such attacks. FedTrident employs Gaussian mixture model clustering to identify malicious clients and incorporates an adaptive historical behavior scoring scheme to dynamically exclude attackers. Furthermore, it integrates machine unlearning techniques to repair the global model contaminated by adversarial updates. Experimental results demonstrate that FedTrident achieves performance close to the attack-free baseline across diverse attack scenarios, outperforming eight state-of-the-art baselines by 9.49% and 4.47% on key metrics, while exhibiting strong robustness against varying proportions of malicious clients, data heterogeneity, and dynamic attack patterns.
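The targeted label-flipping attack the summary describes can be illustrated with a minimal sketch: a malicious client rewrites every label of one safety-critical class as another before local training. The class indices and function name below are hypothetical, chosen only for illustration.

```python
import numpy as np

def targeted_label_flip(labels, src_class, dst_class):
    """Hypothetical TLFA sketch: relabel every sample of src_class
    (e.g. 'uneven road') as dst_class (e.g. 'smooth road') so the
    poisoned local update biases the global model on that class."""
    poisoned = labels.copy()
    poisoned[poisoned == src_class] = dst_class
    return poisoned

# Illustrative label vector; 2 = 'uneven road', 1 = 'smooth road' (assumed)
y = np.array([0, 2, 1, 2, 0])
print(targeted_label_flip(y, src_class=2, dst_class=1))  # → [0 1 1 1 0]
```

Only the attacked source class changes; all other labels stay intact, which is what makes such targeted attacks harder to spot than random label noise.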

📝 Abstract
Federated Learning (FL) has emerged as a transformative paradigm for Intelligent Transportation Systems (ITS), notably for camera-based Road Condition Classification (RCC). However, by enabling collaboration, FL-based RCC exposes the system to adversarial participants launching Targeted Label-Flipping Attacks (TLFAs). Malicious clients (vehicles) can relabel their local training data (e.g., from an actual uneven road to a wrongly labeled smooth road), consequently compromising global model predictions and jeopardizing transportation safety. Existing countermeasures against such poisoning attacks fail to keep model performance near the necessary attack-free levels across attack scenarios because they: 1) do not tailor poisoned local model detection to TLFAs, 2) do not exclude malicious vehicular clients based on historical behavior, and 3) do not remedy the already-corrupted global model after exclusion. To close this research gap, we propose FedTrident, which introduces: 1) neuron-wise analysis for local model misbehavior detection (notably including attack goal identification, critical feature extraction, and Gaussian Mixture Model (GMM)-based model clustering and filtering); 2) adaptive client rating that excludes clients according to the local model detection results in each FL round; and 3) machine unlearning that remediates the corrupted global model once malicious clients are excluded during FL. Extensive evaluation across diverse FL-RCC models, tasks, and configurations demonstrates that FedTrident effectively thwarts TLFAs, achieving performance comparable to attack-free scenarios and outperforming eight baseline countermeasures by 9.49% and 4.47% on the two most critical metrics. Moreover, FedTrident is resilient to varying malicious client rates, data heterogeneity levels, complicated multi-task settings, and dynamic attacks.
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Road Condition Classification
Poisoning Attacks
Targeted Label-Flipping Attacks
Intelligent Transportation Systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Poisoning Attacks
Neuron-wise Analysis
Adaptive Client Rating
Machine Unlearning
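The adaptive client rating idea listed above can be sketched as a per-round score that decays when a client is flagged and recovers otherwise, with exclusion below a threshold. The smoothing factor, threshold, and function name are illustrative assumptions, not the paper's actual scheme.

```python
def update_rating(rating, flagged, alpha=0.5, threshold=0.2):
    """Hedged sketch of adaptive historical rating: each round the score
    moves toward 0 if the client's update was flagged as anomalous,
    toward 1 otherwise. alpha/threshold values are illustrative."""
    new = alpha * rating + (1 - alpha) * (0.0 if flagged else 1.0)
    return new, new < threshold  # (updated score, exclude this client?)

score, excluded = 1.0, False
for flagged in [True, True, True]:  # client flagged three rounds in a row
    score, excluded = update_rating(score, flagged)
print(round(score, 3), excluded)  # → 0.125 True
```

A history-based score of this shape tolerates occasional false positives from the per-round detector while still excluding clients that misbehave persistently.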
Sheng Liu
KTH Royal Institute of Technology
trustworthy AI · federated learning · security and privacy · intelligent transportation
Panagiotis Papadimitratos
Networked Systems Security Group, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden