Towards Explainable Indoor Localization: Interpreting Neural Network Learning on Wi-Fi Fingerprints Using Logic Gates

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of interpretability and the poor robustness to time-varying environmental noise in deep learning–based Wi-Fi fingerprint indoor localization, this paper proposes LogNet, a logic-gate-driven neural framework that integrates programmable logic gates into the network architecture to make localization decisions transparent and diagnosable. LogNet identifies the critical access points for each reference point and quantifies how environmental noise propagates into localization decisions, combining Wi-Fi RSS fingerprint analysis, inversion of model behavior, and adaptation to time-varying conditions. Evaluated on multi-floor real-world deployments and two years of longitudinal data, LogNet achieves a 1.1–2.8× reduction in localization error, a 3.4–43.3× reduction in model size, and a 1.5–3.6× reduction in inference latency, improving long-term deployment reliability and maintainability.

📝 Abstract
Indoor localization using deep learning (DL) has demonstrated strong accuracy in mapping Wi-Fi RSS fingerprints to physical locations; however, most existing DL frameworks function as black-box models, offering limited insight into how predictions are made or how models respond to real-world noise over time. This lack of interpretability hampers our ability to understand the impact of temporal variations caused by environmental dynamics, and to adapt models for long-term reliability. To address this, we introduce LogNet, a novel logic gate-based framework designed to interpret and enhance DL-based indoor localization. LogNet enables transparent reasoning by identifying which access points (APs) are most influential for each reference point (RP) and reveals how environmental noise disrupts DL-driven localization decisions. This interpretability allows us to trace and diagnose model failures and adapt DL systems for more stable long-term deployments. Evaluations across multiple real-world building floorplans and over two years of temporal variation show that LogNet not only interprets the internal behavior of DL models but also improves performance, achieving 1.1x to 2.8x lower localization error, 3.4x to 43.3x smaller model size, and 1.5x to 3.6x lower latency compared to prior DL-based models.
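The abstract does not spell out LogNet's exact architecture, but logic-gate-based neural layers are commonly built from differentiable relaxations of two-input Boolean gates, where each neuron learns a softmax distribution over the 16 possible gates and collapses to a single discrete gate after training. That discreteness is what makes the trained network readable as a logic circuit, e.g. for tracing which AP inputs drive a given RP decision. The sketch below (names, layer shape, and the probabilistic relaxation are illustrative assumptions, not the paper's implementation) shows one such gate neuron in NumPy:

```python
import numpy as np

def soft_gates(a, b):
    """Probabilistic relaxations of the 16 two-input Boolean gates.

    a and b are arrays in [0, 1], read as P(input = 1); each row is the
    expected output of one gate under that interpretation.
    """
    return np.stack([
        np.zeros_like(a),        # FALSE
        a * b,                   # AND
        a - a * b,               # a AND NOT b
        a,                       # pass a
        b - a * b,               # NOT a AND b
        b,                       # pass b
        a + b - 2 * a * b,       # XOR
        a + b - a * b,           # OR
        1 - (a + b - a * b),     # NOR
        1 - (a + b - 2 * a * b), # XNOR
        1 - b,                   # NOT b
        1 - b + a * b,           # a OR NOT b
        1 - a,                   # NOT a
        1 - a + a * b,           # NOT a OR b
        1 - a * b,               # NAND
        np.ones_like(a),         # TRUE
    ])

class LogicGateNeuron:
    """One learnable gate: a softmax mixture over the 16 Boolean functions.

    During training the mixture is differentiable; at deployment, taking the
    argmax over the logits yields one discrete, human-readable gate.
    """
    def __init__(self, rng):
        self.logits = rng.normal(size=16)  # trainable parameters

    def forward(self, a, b):
        w = np.exp(self.logits - self.logits.max())
        w /= w.sum()                       # softmax over gate choices
        return np.tensordot(w, soft_gates(a, b), axes=1)

rng = np.random.default_rng(0)
neuron = LogicGateNeuron(rng)
a = np.array([0.0, 1.0, 0.2])  # e.g. thresholded RSS from one AP
b = np.array([1.0, 1.0, 0.9])  # e.g. thresholded RSS from another AP
out = neuron.forward(a, b)     # convex mixture, stays in [0, 1]
```

Because the output is a convex combination of gate outputs, gradients flow to the logits during training, while the discretized circuit afterward supports the kind of AP-level attribution the abstract describes.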
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability of deep learning in indoor localization
Addressing impact of environmental noise on Wi-Fi fingerprint models
Improving long-term reliability and performance of localization systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logic gate-based framework for interpretable localization
Identifies influential access points for transparent reasoning
Improves performance with lower error and latency