🤖 AI Summary
To address the trade-off between low-accuracy Wi-Fi RSSI-based indoor positioning and the high cost and deployment complexity of LiDAR-SLAM, this paper proposes a multi-source heterogeneous sensor fusion framework. It employs a deep neural network (DNN) for coarse Wi-Fi fingerprinting localization, while leveraging LiDAR-SLAM (via Gmapping) and IMU data to construct an environmental map and provide high-precision pose priors. An extended Kalman filter (EKF) dynamically fuses the three complementary observations (RSSI, IMU, and LiDAR-SLAM) to jointly suppress RSSI noise, accumulated IMU drift, and SLAM mapping errors. The method achieves a favorable balance between cost and accuracy: experimental results show stable 2D positioning errors within 0.245–0.378 m, significantly outperforming single-sensor baselines. This validates the proposed architecture's robustness, adaptability, and practical viability for real-world indoor localization.
📝 Abstract
Conventional Wi-Fi received signal strength indicator (RSSI) fingerprinting cannot meet the growing demand for accurate indoor localization and navigation due to its low accuracy, while solutions based on light detection and ranging (LiDAR) can provide better localization performance but are limited by their higher deployment cost and complexity. To address these issues, we propose a novel indoor localization and navigation framework integrating Wi-Fi RSSI fingerprinting, LiDAR-based simultaneous localization and mapping (SLAM), and inertial measurement unit (IMU) navigation based on an extended Kalman filter (EKF). Specifically, coarse localization from deep neural network (DNN)-based Wi-Fi RSSI fingerprinting is refined by IMU-based dynamic positioning, while Gmapping-based SLAM generates an occupancy grid map and outputs high-frequency pose estimates; EKF prediction-update steps then integrate the sensor information while effectively suppressing Wi-Fi-induced noise and IMU drift errors. Multi-group real-world experiments conducted in the IR building at Xi'an Jiaotong-Liverpool University demonstrate that the proposed multi-sensor fusion framework suppresses the instability of the individual approaches and thereby provides stable accuracy across all path configurations, with mean two-dimensional (2D) errors ranging from 0.2449 m to 0.3781 m. In contrast, the mean 2D errors of Wi-Fi RSSI fingerprinting reach up to 1.3404 m in areas with severe signal interference, and those of LiDAR/IMU localization range from 0.6233 m to 2.8803 m due to cumulative drift.
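The EKF prediction-update loop described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simplified 2D position-only state, treats IMU odometry as the prediction input and the Wi-Fi and LiDAR-SLAM fixes as direct position measurements (H = I), and uses illustrative noise covariances chosen so that the precise LiDAR fix dominates the coarse Wi-Fi fix.

```python
import numpy as np

class FusionEKF:
    """Minimal 2D EKF sketch: IMU odometry drives the prediction step;
    Wi-Fi RSSI and LiDAR-SLAM position fixes drive the update step.
    All models and noise values are illustrative assumptions."""

    def __init__(self):
        self.x = np.zeros(2)       # state: planar position [x, y] in metres
        self.P = np.eye(2) * 1.0   # state covariance
        self.Q = np.eye(2) * 0.01  # process noise (assumed IMU drift per step)

    def predict(self, imu_displacement):
        # Odometry motion model: x_k = x_{k-1} + Δp_imu, Jacobian F = I
        self.x = self.x + imu_displacement
        self.P = self.P + self.Q

    def update(self, z, R):
        # Direct position measurement (H = I), so the innovation is z - x
        y = z - self.x
        S = self.P + R                 # innovation covariance
        K = self.P @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K) @ self.P

ekf = FusionEKF()
R_wifi = np.eye(2) * 1.0    # coarse Wi-Fi fingerprint fix (assumed ~1 m std)
R_lidar = np.eye(2) * 0.05  # precise LiDAR-SLAM fix (assumed)

ekf.predict(np.array([0.5, 0.0]))           # IMU: moved 0.5 m along x
ekf.update(np.array([0.7, 0.1]), R_wifi)    # noisy Wi-Fi fingerprint fix
ekf.update(np.array([0.52, 0.0]), R_lidar)  # LiDAR-SLAM correction
print(np.round(ekf.x, 3))
```

Because the update weights each measurement by the inverse of its covariance, the fused estimate ends up close to the low-noise LiDAR fix while the Wi-Fi fix only nudges it, which mirrors the coarse-to-fine refinement the abstract describes.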