ARMOR: Adaptive Resilience Against Model Poisoning Attacks in Continual Federated Learning for Mobile Indoor Localization

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of device heterogeneity, dynamic environmental changes, and model poisoning attacks in continual federated learning for mobile indoor localization, which often degrade global model performance and distort learned representations. To tackle these issues, the authors propose ARMOR, a novel framework that, for the first time, integrates state-space models (SSMs) into continual federated learning to dynamically model the historical evolution of global model weights. ARMOR further incorporates a bias-detection mechanism to adaptively identify and filter out anomalous local updates. Experimental results on real mobile devices in actual indoor environments demonstrate that ARMOR reduces average localization error by up to 8.0x and worst-case error by up to 4.97x compared to existing methods, significantly enhancing model robustness, environmental adaptability, and resilience against poisoning attacks.

📝 Abstract
Indoor localization has become increasingly essential for applications ranging from asset tracking to delivering personalized services. Federated learning (FL) offers a privacy-preserving approach by training a centralized global model (GM) using distributed data from mobile devices without sharing raw data. However, real-world deployments require a continual federated learning (CFL) setting, where the GM receives continual updates under device heterogeneity and evolving indoor environments. In such dynamic conditions, erroneous or biased updates can cause the GM to deviate from its expected learning trajectory, gradually degrading internal GM representations and localization performance. This vulnerability is further exacerbated by adversarial model poisoning attacks. To address this challenge, we propose ARMOR, a novel CFL-based framework that monitors and safeguards the GM during continual updates. ARMOR introduces a novel state-space model (SSM) that learns the historical evolution of GM weight tensors and predicts their expected next state. By comparing incoming local updates with this SSM projection, ARMOR detects deviations and selectively mitigates corrupted updates before they are aggregated into the GM. This mechanism enables robust adaptation to temporal environmental dynamics and mitigates the effects of model poisoning attacks while preventing GM corruption. Experimental evaluations in real-world conditions indicate that ARMOR achieves notable improvements, with up to 8.0x reduction in mean error and 4.97x reduction in worst-case error compared to state-of-the-art indoor localization frameworks, demonstrating strong resilience against model corruption in tests with real-world data and mobile devices.
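The deviation-detection mechanism described in the abstract can be illustrated with a minimal sketch: predict the next global-model weight state from its history, then keep only local updates whose implied next state stays close to that projection. This is a hypothetical simplification, not the paper's implementation; the linear extrapolator stands in for ARMOR's learned SSM, and `filter_updates` and its `threshold` parameter are illustrative names, not from the paper.

```python
import numpy as np

def predict_next_state(weight_history, alpha=0.5):
    """Predict the next global-model weight vector from its history.

    Toy linear extrapolation from the last two states; in ARMOR this
    role is played by a learned state-space model (SSM).
    """
    prev, curr = weight_history[-2], weight_history[-1]
    return curr + alpha * (curr - prev)

def filter_updates(weight_history, local_updates, threshold=0.5):
    """Keep local updates whose implied next GM state lies within
    `threshold` (L2 distance) of the SSM-style projection; drop the
    rest as anomalous before aggregation."""
    expected = predict_next_state(weight_history)
    kept = []
    for update in local_updates:
        candidate = weight_history[-1] + update  # state if this update were applied
        if np.linalg.norm(candidate - expected) <= threshold:
            kept.append(update)
    return kept

# Toy example: one benign drift update and one poisoned (large-norm) update.
history = [np.zeros(4), np.full(4, 0.1)]
benign = np.full(4, 0.05)
poisoned = np.full(4, 5.0)
kept = filter_updates(history, [benign, poisoned])
```

In this toy run only the benign update survives the filter; the poisoned update, whose implied next state is far from the projected trajectory, is discarded before it can corrupt the global model.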
Problem

Research questions and friction points this paper is trying to address.

Continual Federated Learning
Model Poisoning Attacks
Indoor Localization
Global Model Corruption
Device Heterogeneity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Federated Learning
Model Poisoning Defense
State-Space Model
Adaptive Resilience
Mobile Indoor Localization