Unified Class and Domain Incremental Learning with Mixture of Experts for Indoor Localization

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In indoor localization, device heterogeneity induces domain shift, while environmental evolution causes class shift, rendering static models obsolete over time. To address these challenges, we propose the first lightweight, sustainable localization framework that jointly handles class-incremental and domain-incremental learning. Methodologically, it integrates a Mixture-of-Experts (MoE) architecture with an equiangular tight frame (ETF) based gating mechanism to enable low-latency, high-accuracy dynamic expert routing and adaptive inference; further, it combines incremental learning with domain adaptation to support continuous deployment across heterogeneous devices and evolving environments. Experiments demonstrate that our framework improves mean localization error by up to 25.6×, worst-case error by up to 44.5×, and reduces catastrophic forgetting by up to 21.5× compared to state-of-the-art approaches, marking significant advances in robustness, efficiency, and long-term adaptability.

📝 Abstract
Indoor localization using machine learning has gained traction due to the growing demand for location-based services. However, its long-term reliability is hindered by hardware/software variations across mobile devices, which shift the model's input distribution to create domain shifts. Further, evolving indoor environments can introduce new locations over time, expanding the output space to create class shifts and making static machine learning models ineffective. To address these challenges, we propose a novel unified continual learning framework for indoor localization called MOELO that, for the first time, jointly addresses domain-incremental and class-incremental learning scenarios. MOELO enables a lightweight, robust, and adaptive localization solution that can be deployed on resource-limited mobile devices and is capable of continual learning in dynamic, heterogeneous real-world settings. This is made possible by a mixture-of-experts architecture, where experts are incrementally trained per region and selected through an equiangular tight frame based gating mechanism that ensures efficient routing and low-latency inference, all within a compact model footprint. Experimental evaluations show that MOELO achieves improvements of up to 25.6x in mean localization error, 44.5x in worst-case localization error, and 21.5x less forgetting compared to state-of-the-art frameworks across diverse buildings, mobile devices, and learning scenarios.
Problem

Research questions and friction points this paper is trying to address.

Addresses domain shifts from mobile device hardware variations
Handles class shifts as new locations appear in evolving environments
Enables continual learning for indoor localization on resource-limited devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-experts architecture enables incremental learning
Equiangular tight frame gating ensures efficient expert routing
Compact model supports low-latency inference on mobile devices
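The equiangular tight frame gating described above can be sketched in a few lines. The paper's implementation is not reproduced here, so this is a minimal illustration under a common assumption from the neural-collapse literature: each expert is assigned a fixed unit prototype from a simplex ETF (prototypes that are maximally and equally separated), and an input feature is routed to the expert whose prototype it is most similar to. The function names and parameters below are hypothetical, not the paper's API.

```python
import numpy as np

def simplex_etf(num_experts: int, dim: int, seed: int = 0) -> np.ndarray:
    """Build a simplex equiangular tight frame of `num_experts` unit
    prototype columns in `dim` dimensions (requires dim >= num_experts).
    Any two distinct prototypes have cosine similarity -1/(num_experts-1),
    i.e. they are equally and maximally separated."""
    K = num_experts
    rng = np.random.default_rng(seed)
    # Random orthonormal basis U (dim x K) via reduced QR decomposition.
    U, _ = np.linalg.qr(rng.standard_normal((dim, K)))
    # Simplex ETF construction: M = sqrt(K/(K-1)) * U @ (I - (1/K) * 1 1^T).
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M  # columns are unit-norm prototypes

def route(feature: np.ndarray, prototypes: np.ndarray) -> int:
    """Gate: pick the expert whose ETF prototype best matches the feature."""
    f = feature / np.linalg.norm(feature)
    return int(np.argmax(f @ prototypes))
```

Because the prototypes are fixed and equiangular, gating is a single matrix-vector product followed by an argmax, which is consistent with the low-latency, compact-footprint routing the summary claims.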