AI Summary
To address the core challenges of privacy preservation, real-time responsiveness, and high false-alarm rates in elderly fall detection, this paper proposes SF2D, a multi-stage collaborative framework integrating wearable sensing, edge computing, and mobile robot vision to establish an "end-edge-cloud-robot" cooperative system. Methodologically, it introduces semi-supervised federated learning for localized model training with rigorous privacy protection, and combines indoor localization and navigation with robot-enabled active visual verification to realize a three-tier decision pipeline: initial fall screening, precise localization, and visual confirmation. Experimental results demonstrate an overall system accuracy of 99.99%, fall detection accuracy of 99.19%, visual recognition accuracy of 96.3%, and a navigation success rate of 95%. The framework significantly reduces false alarms while achieving high accuracy, ultra-low latency, and strong privacy guarantees.
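The three-tier decision pipeline described above might be sketched as follows. This is a minimal illustration under our own assumptions; the function and variable names are hypothetical and the actual decision rules are those of the paper, not this sketch.

```python
def three_tier_decision(wearable_alarm: bool,
                        robot_reached_scene: bool,
                        vision_confirms_fall: bool) -> bool:
    """Illustrative screening -> localization/navigation -> visual
    confirmation pipeline (hypothetical logic, not from the paper)."""
    if not wearable_alarm:
        return False                 # stage 1: no candidate fall flagged
    if not robot_reached_scene:
        return True                  # fallback: trust the wearable alarm
    return vision_confirms_fall      # stage 3: camera settles the decision
```

Under this sketch, a caregiver alert is raised only when the function returns `True`, so wearable false alarms that the robot camera rules out are suppressed.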
Abstract
The aging population is growing rapidly, and with it the risk of falls among older adults. Falls are a major cause of injury, and timely detection can substantially reduce medical expenses and recovery time. To provide timely intervention while avoiding unnecessary alarms, however, a detection system must be both effective and reliable, and it must address users' privacy concerns. In this work, we propose a framework that detects falls using several complementary systems: a semi-supervised federated learning-based fall detection system (SF2D), an indoor localization and navigation system, and a vision-based human fall recognition system. In the first system, a wearable device and an edge device identify a candidate fall. The second system then localizes the fall using an indoor localization technique and navigates a robot to inspect the scene. Finally, a vision-based detection system, running on an edge device with a camera mounted on the robot, recognizes fallen people. Each system in the proposed framework achieves a different accuracy: SF2D has a 0.81% failure rate (99.19% accuracy), while the vision-based fallen-person detection achieves 96.3% accuracy. Combining these two systems with the navigation system (95% success rate) yields highly reliable fall detection, with an overall accuracy of 99.99%. The proposed framework is not only safe for older adults but also a privacy-preserving solution for detecting falls.
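The reported 99.99% overall accuracy is consistent with the three subsystems failing independently, so that the overall failure rate is the product of the individual failure rates. The independence assumption is ours, not stated explicitly in the abstract; a minimal check of the arithmetic:

```python
# Individual failure rates from the abstract's figures.
f_fall   = 1 - 0.9919   # SF2D fall screening (0.81% failure)
f_vision = 1 - 0.963    # vision-based fall recognition
f_nav    = 1 - 0.95     # robot navigation

# Assuming independent failures, the system fails only when all
# three stages fail at once.
overall = 1 - f_fall * f_vision * f_nav
print(f"{overall:.4%}")  # prints 99.9985%, i.e. the reported 99.99%
```

This reading treats the three subsystems as redundant checks on the same event; if any one of them had to succeed for the system to succeed, the combined accuracy would instead be lower than each individual figure.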