Suite-IN++: A FlexiWear BodyNet Integrating Global and Local Motion Features from Apple Suite for Robust Inertial Navigation

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor robustness of traditional pedestrian dead reckoning (PDR) under diverse gait patterns and the limited generalizability of single-device data-driven approaches, this paper proposes Suite-IN++, the first multi-body-position wearable inertial navigation framework that leverages a flexiwear bodynet of Apple Suite devices (iPhone, Apple Watch, and AirPods). It introduces a novel architecture that disentangles global and local motion features: global motion features are fused via device-reliability-weighted aggregation, while local body-position correlations are modeled through a cross-device attention mechanism. By combining deep learning, contrastive learning, and multi-sensor cooperative perception, Suite-IN++ achieves state-of-the-art performance on a real-world dataset encompassing multiple gaits and wearable configurations. Experimental results demonstrate a 23.6% improvement in localization accuracy and significantly enhanced robustness against motion variability and environmental interference.

📝 Abstract
The proliferation of wearable technology has established multi-device ecosystems comprising smartphones, smartwatches, and headphones as critical enablers for ubiquitous pedestrian localization. However, traditional pedestrian dead reckoning (PDR) struggles with diverse motion modes, while data-driven methods, despite improving accuracy, often lack robustness due to their reliance on a single-device setup. Therefore, a promising solution is to fully leverage existing wearable devices to form a flexiwear bodynet for robust and accurate pedestrian localization. This paper presents Suite-IN++, a deep learning framework for flexiwear bodynet-based pedestrian localization. Suite-IN++ integrates motion data from wearable devices on different body parts, using contrastive learning to separate global and local motion features. It fuses global features based on the data reliability of each device to capture overall motion trends and employs an attention mechanism to uncover cross-device correlations in local features, extracting motion details helpful for accurate localization. To evaluate our method, we construct a real-life flexiwear bodynet dataset, incorporating Apple Suite (iPhone, Apple Watch, and AirPods) across diverse walking modes and device configurations. Experimental results demonstrate that Suite-IN++ achieves superior localization accuracy and robustness, significantly outperforming state-of-the-art models in real-life pedestrian tracking scenarios.
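The abstract describes two fusion steps: global features from each device are combined with weights reflecting that device's data reliability, and local features are exchanged across devices via attention. The paper's actual network is not shown here; the following is a minimal numpy sketch of both steps, in which the feature dimensions, reliability scores, and random features are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical per-device global features: 3 devices (iPhone, Watch, AirPods) x 8 dims.
g = rng.normal(size=(3, 8))
# Hypothetical learned reliability scores, one per device.
reliability = np.array([0.5, 1.2, 0.3])

# Step 1: reliability-weighted fusion of global motion features.
w = softmax(reliability)          # weights sum to 1
g_fused = w @ g                   # fused global feature, shape (8,)

# Step 2: cross-device attention over local motion features.
l = rng.normal(size=(3, 8))       # hypothetical per-device local features
d = l.shape[-1]
attn = softmax(l @ l.T / np.sqrt(d), axis=-1)  # (3, 3) cross-device weights
l_out = attn @ l                  # local features enriched with cross-device detail
```

In a trained model the reliability scores and the attention projections would be learned end-to-end; here self-attention uses the raw features as queries, keys, and values purely to illustrate the data flow.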
Problem

Research questions and friction points this paper is trying to address.

Enhances pedestrian localization using multiple wearable devices
Integrates global and local motion features for accuracy
Improves robustness in diverse walking modes and configurations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates global and local motion features
Uses contrastive learning for feature separation
Employs attention for cross-device correlations
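The second innovation bullet, separating global from local motion features with contrastive learning, can be illustrated with a toy InfoNCE-style loss. This is a guess at the general mechanism, not the paper's loss: it assumes that global features from different devices in the same time window form a positive pair (they share the body's overall motion), while a global/local pair is treated as negative.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

rng = np.random.default_rng(1)
g_phone, g_watch = rng.normal(size=(2, 8))  # hypothetical global features, two devices
l_phone = rng.normal(size=8)                # hypothetical local feature, phone

tau = 0.1  # temperature
pos = np.exp(cosine(g_phone, g_watch) / tau)  # positive: shared global motion
neg = np.exp(cosine(g_phone, l_phone) / tau)  # negative: global vs. local
loss = -np.log(pos / (pos + neg))             # InfoNCE-style contrastive term
```

Minimizing such a term pushes the global branches of all devices toward a common motion representation while driving local features away from it, which is one plausible way the disentanglement described above could be realized.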
Lan Sun
Shanghai Jiao Tong University
Motion Capture with Wearables, Inertial Navigation, Deep Learning
Songpengcheng Xia
Shanghai Jiao Tong University
Deep Learning, Wearable Computing, Motion Capture, HAR, HPE
Jiarui Yang
Shanghai Key Laboratory of Navigation and Location-based Services, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China, 200240
Ling Pei
Shanghai Jiao Tong University
Shanghai Jiao Tong University
Navigation, Positioning, SLAM, Sensor Fusion, GNSS
