An AI-driven multimodal smart home platform for continuous monitoring and intelligent assistance in post-stroke patients

📅 2024-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Stroke survivors face significant challenges in home-based rehabilitation, including insufficient personalization, fragmented multimodal monitoring, and disjointed assistive functionalities. To address these issues, we propose the first multimodal intelligent home platform specifically designed for post-stroke home rehabilitation. The system integrates flexible piezoelectric insole sensing, wearable eye-tracking, and ambient environmental perception to enable gait-phase recognition, cognitive assessment, hands-free interaction, and low-latency response. We introduce a novel localized multimodal IoT fusion architecture and an embedded lightweight large language model (LLM)-driven care agent—Auto-Care—that delivers real-time, privacy-preserving closed-loop interventions directly on-device. Experimental evaluation demonstrates 94% accuracy in gait-phase classification, sub-1-second environmental interaction latency, and a statistically significant 115% improvement in user satisfaction (p < 0.01). This work establishes a scalable technical paradigm bridging neurorehabilitation and aging-in-place.

📝 Abstract
At-home rehabilitation for post-stroke patients presents significant challenges, as continuous, personalized care is often limited outside clinical settings. Additionally, the absence of comprehensive solutions addressing diverse monitoring and assistance needs in home environments complicates recovery efforts. Here, we present a multimodal smart home platform designed for continuous, at-home rehabilitation of post-stroke patients, integrating wearable sensing, ambient monitoring, and adaptive automation. A plantar pressure insole equipped with a machine learning pipeline classifies users into motor recovery stages with up to 94% accuracy, enabling quantitative tracking of walking patterns. A head-mounted eye-tracking module supports cognitive assessments and hands-free control of household devices, while ambient sensors ensure sub-second response times for interaction. These data streams are fused locally via a hierarchical Internet of Things (IoT) architecture, protecting privacy and minimizing latency. An embedded large language model (LLM) agent, Auto-Care, continuously interprets multimodal data to provide real-time interventions: issuing personalized reminders, adjusting environmental conditions, and notifying caregivers. Implemented in a post-stroke context, this integrated smart home platform increases overall user satisfaction by an average of 115% (p < 0.01) compared to a traditional home environment. Beyond stroke, the system offers a scalable framework for patient-centered, long-term care in broader neurorehabilitation and aging-in-place applications.
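The abstract reports a machine learning pipeline that maps insole pressure data to motor recovery stages, but does not specify the model here. As a rough illustration of that kind of approach (summary features per gait cycle, then a supervised classifier), a minimal sketch is shown below. The feature set, the synthetic traces, and the nearest-centroid classifier are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(trace):
    """Summary features from one insole pressure trace (one gait cycle)."""
    return np.array([
        trace.mean(),                   # mean load
        trace.max(),                    # peak pressure
        trace.std(),                    # load variability
        np.argmax(trace) / trace.size,  # relative timing of peak load
    ])

def make_trace(stage):
    """Synthetic stand-in for a pressure trace; amplitude scales with stage."""
    t = np.linspace(0.0, 1.0, 100)
    return np.sin(np.pi * t) * (1.0 + 0.4 * stage) + rng.normal(0.0, 0.05, t.size)

# Toy training set: 3 hypothetical recovery stages x 40 gait cycles each.
X = np.array([extract_features(make_trace(s)) for s in range(3) for _ in range(40)])
y = np.repeat(np.arange(3), 40)

# Nearest-centroid classifier -- a deliberately simple stand-in for the
# paper's (unspecified) machine learning pipeline.
centroids = np.array([X[y == s].mean(axis=0) for s in range(3)])

def predict(features):
    """Assign the recovery stage whose feature centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

preds = np.array([predict(x) for x in X])
accuracy = (preds == y).mean()
print(f"training accuracy on toy data: {accuracy:.2f}")
```

On this cleanly separated synthetic data the toy classifier scores near-perfectly; the 94% figure in the abstract refers to the authors' real pipeline on real patient data, not to this sketch.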
Problem

Research questions and friction points this paper is trying to address.

Continuous, personalized post-stroke care is difficult to sustain outside clinical settings.
Home-based monitoring is fragmented across modalities, with no comprehensive multimodal solution.
Assistive functionalities in existing home environments are disjointed rather than integrated.
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-driven multimodal smart home platform
Wearable sensing and ambient monitoring integration
Embedded LLM agent for real-time interventions
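The bullets above describe a local fusion architecture feeding an embedded agent (Auto-Care) that closes the loop with interventions. A minimal sketch of such a control loop follows; the fused-state fields and threshold rules are hypothetical stand-ins, since the actual platform drives this decision step with an on-device LLM rather than fixed rules.

```python
from dataclasses import dataclass

@dataclass
class FusedState:
    """Locally fused snapshot of the multimodal streams (hypothetical fields)."""
    gait_stage: int      # from the insole classifier (0 = most impaired)
    gaze_dwell_s: float  # eye-tracker dwell time on a device control
    room_lux: float      # ambient light level

def decide(state: FusedState) -> list:
    """Map the fused state to interventions.

    Simple threshold rules are used purely for illustration; in the paper
    this role is played by the embedded LLM agent.
    """
    actions = []
    if state.gait_stage == 0:
        actions.append("notify_caregiver")        # possible mobility regression
    if state.gaze_dwell_s > 1.0:
        actions.append("toggle_selected_device")  # hands-free eye control
    if state.room_lux < 50:
        actions.append("raise_lighting")          # ambient adjustment
    return actions

actions = decide(FusedState(gait_stage=0, gaze_dwell_s=1.5, room_lux=30.0))
print(actions)
```

Running the loop on this example state triggers all three hypothetical interventions: `['notify_caregiver', 'toggle_selected_device', 'raise_lighting']`. Keeping both fusion and decision on-device is what the paper credits for its privacy preservation and sub-second latency.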
Chenyu Tang
Hangzhou International Innovation Institute, Beihang University, Hangzhou, China
Ruizhi Zhang
School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China
Shuo Gao
Beihang University, University of Cambridge (Ph.D.)
AI for Healthcare · Wearable Systems · Human Body Digital Twins · Neural Computing
Zihe Zhao
School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China
Zibo Zhang
Department of Engineering, University of Cambridge, Cambridge, UK
Jiaqi Wang
School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China
Cong Li
School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China
Junliang Chen
School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China
Yanning Dai
AI Initiative, KAUST, Thuwal, Saudi Arabia
Shengbo Wang
School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China
Ruoyu Juan
Beijing New Guoxin Software Evaluation Technology Co., Ltd., Beijing, China
Qiaoying Li
Stomatology Department, Shijiazhuang People's Hospital, Shijiazhuang, China
Ruimou Xie
Department of Rehabilitation Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
Xuhang Chen
Huizhou University
computational imaging · low-level vision · computational photography
Xinkai Zhou
HUB of Intelligent Neuro-engineering (HUBIN), CREATe, Division of Surgery and Interventional Science, UCL, Stanmore, UK
Yunjia Xia
HUB of Intelligent Neuro-engineering (HUBIN), CREATe, Division of Surgery and Interventional Science, UCL, Stanmore, UK
Jianan Chen
HUB of Intelligent Neuro-engineering (HUBIN), CREATe, Division of Surgery and Interventional Science, UCL, Stanmore, UK
Fanghao Lu
Hangzhou International Innovation Institute, Beihang University, Hangzhou, China
Xin Li
Department of Rehabilitation Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
Ninglli Wang
Beijing Tongren Hospital, Capital Medical University, Beijing, China
P. Smielewski
Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
Yu Pan
Department of Rehabilitation Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
Hubin Zhao
HUB of Intelligent Neuro-engineering (HUBIN), CREATe, Division of Surgery and Interventional Science, UCL, Stanmore, UK
L. Occhipinti
Department of Engineering, University of Cambridge, Cambridge, UK