Scalable Wi-Fi RSS-Based Indoor Localization via Automatic Vision-Assisted Calibration

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Wi-Fi RSS-based indoor localization suffers from sensitivity to multipath effects, poor robustness under channel and device heterogeneity, and a heavy reliance on costly, labor-intensive labeled data for supervised learning. To address these challenges, this paper proposes a lightweight vision-aided calibration framework. Using an overhead camera calibrated once with ArUco markers, it combines high-precision motion tracking with synchronized multi-channel Wi-Fi RSS acquisition, automatically generating high-resolution RSS-position pairs without repeated deployment or manual annotation. The framework supports cross-device and cross-channel generalization while preserving privacy and remaining scalable. Experiments demonstrate significantly improved localization accuracy over conventional methods. All code, calibration tools, and datasets are publicly released, facilitating practical deployment of privacy-preserving indoor localization systems.
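The "synchronized RSS acquisition" step amounts to labeling each RSS sample with the camera-tracked position closest to it in time. The paper does not publish this routine here; the sketch below is an illustrative nearest-timestamp pairing (the function name, data layout, and `max_skew` threshold are assumptions, not the authors' code):

```python
import bisect

def pair_rss_with_positions(rss_samples, track, max_skew=0.10):
    """Label each RSS sample with the camera position whose timestamp
    is nearest, discarding pairs whose time skew exceeds max_skew (s).

    rss_samples: list of (timestamp, rss_vector)
    track:       list of (timestamp, x, y), sorted by timestamp
    returns:     list of (x, y, rss_vector) training tuples
    """
    times = [t for t, _, _ in track]
    dataset = []
    for t, rss in rss_samples:
        i = bisect.bisect_left(times, t)
        # candidate neighbours: the track points just before and just after t
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(times)),
            key=lambda j: abs(times[j] - t),
        )
        if abs(times[best] - t) <= max_skew:
            _, x, y = track[best]
            dataset.append((x, y, rss))
    return dataset
```

The skew cutoff matters because the paper explicitly quantifies label-synchronization error as an accuracy limit: a sample matched to a stale position would inject label noise into the training set.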

📝 Abstract
Wi-Fi-based positioning promises a scalable and privacy-preserving solution for location-based services in indoor environments such as malls, airports, and campuses. RSS-based methods are widely deployable as RSS data is available on all Wi-Fi-capable devices, but RSS is highly sensitive to multipath, channel variations, and receiver characteristics. While supervised learning methods offer improved robustness, they require large amounts of labeled data, which is often costly to obtain. We introduce a lightweight framework that solves this by automating high-resolution synchronized RSS-location data collection using a short, camera-assisted calibration phase. An overhead camera is calibrated only once with ArUco markers and then tracks a device collecting RSS data from broadcast packets of nearby access points across Wi-Fi channels. The resulting (x, y, RSS) dataset is used to automatically train mobile-deployable localization algorithms, avoiding the privacy concerns of continuous video monitoring. We quantify the accuracy limits of such vision-assisted RSS data collection under key factors such as tracking precision and label synchronization. Using the collected experimental data, we benchmark traditional and supervised learning approaches under varying signal conditions and device types, demonstrating improved accuracy and generalization, validating the utility of the proposed framework for practical use. All code, tools, and datasets are released as open source.
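The one-time ArUco calibration described in the abstract can be understood as fitting a planar homography that maps camera pixels to floor coordinates. Below is a minimal sketch of that mapping via the direct linear transform, assuming a planar floor and known marker-corner correspondences (the function names and point values are illustrative, not from the paper's released tooling):

```python
import numpy as np

def estimate_homography(pixel_pts, floor_pts):
    """Estimate the 3x3 homography H mapping image pixels to floor
    coordinates from >= 4 point correspondences (direct linear transform)."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, floor_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    # h is the right singular vector of A with the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_floor(H, u, v):
    """Project a pixel (u, v) to floor-plane coordinates (x, y)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

Once H is estimated from the marker corners, every subsequent frame only needs 2-D device detection in pixels to produce metric (x, y) labels, which is why the camera has to be calibrated only once.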
Problem

Research questions and friction points this paper is trying to address.

Automating high-resolution synchronized RSS-location data collection for indoor positioning
Reducing costly labeled data requirements in supervised Wi-Fi localization methods
Addressing RSS sensitivity to multipath and device variations through vision-assisted calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated vision-assisted calibration for Wi-Fi RSS data collection
Overhead camera tracking with ArUco markers for synchronization
Generates mobile-deployable localization models without continuous monitoring
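One family of "mobile-deployable localization algorithms" that the resulting (x, y, RSS) dataset supports is fingerprint matching. As a hedged sketch only (the paper benchmarks several traditional and supervised methods; this particular weighted k-NN variant is an assumed illustration), a query RSS vector can be located by averaging the positions of its nearest fingerprints in signal space:

```python
import math

def knn_localize(query_rss, dataset, k=3):
    """Estimate (x, y) for a query RSS vector by distance-weighted
    averaging of the k nearest fingerprints in signal space.

    dataset: list of (x, y, rss_vector) tuples, e.g. as produced by
             the vision-assisted calibration phase
    """
    def signal_dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    neighbours = sorted(dataset, key=lambda s: signal_dist(query_rss, s[2]))[:k]
    # inverse-distance weights; the epsilon avoids division by zero on exact hits
    weights = [1.0 / (signal_dist(query_rss, s[2]) + 1e-9) for s in neighbours]
    total = sum(weights)
    x = sum(w * s[0] for w, s in zip(weights, neighbours)) / total
    y = sum(w * s[1] for w, s in zip(weights, neighbours)) / total
    return x, y
```

Because inference needs only the fingerprint table and a distance computation, no camera is required at deployment time, which is how the framework sidesteps the privacy concerns of continuous video monitoring.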