🤖 AI Summary
Traditional postoperative follow-up relies on manual interviews and paper-based documentation, resulting in low efficiency and high operational costs; existing digital alternatives, such as web-based questionnaires and automated voice-calling systems, suffer from rigid interaction paradigms or pose patient privacy risks. This paper proposes a multimodal autonomous dialogue robot deployed on edge devices that integrates a lightweight large language model (LLM), on-device multimodal perception, dynamic path planning, and a privacy-by-design architecture to enable secure, adaptive, face-to-face follow-up at the point of care. Its key contributions include: (i) the first deep integration of an edge-deployed LLM into the clinical follow-up closed loop; and (ii) real-time question understanding, patient-state-driven visit scheduling, and automated generation of structured clinical reports. Evaluation demonstrates significant improvements over baselines in follow-up coverage (+32%), patient satisfaction (4.82/5), and report accuracy (96.7%), confirming cross-departmental clinical deployability.
📝 Abstract
Postoperative follow-up plays a crucial role in monitoring recovery and identifying complications. However, traditional approaches, which typically involve bedside interviews and manual documentation, are time-consuming and labor-intensive. Although existing digital solutions, such as web questionnaires and intelligent automated calls, can alleviate nurses' workload to a certain extent, they either deliver rigid, scripted interactions or risk leaking private patient information. To address these limitations, this paper introduces FollowUpBot, an LLM-powered, edge-deployed robot for postoperative care and monitoring. It dynamically plans optimal follow-up routes and uses edge-deployed LLMs to conduct adaptive, face-to-face conversations with patients through multiple interaction modes, ensuring data privacy. Moreover, FollowUpBot automatically generates structured postoperative follow-up reports for healthcare institutions by analyzing patient interactions during follow-up. Experimental results demonstrate that our robot achieves high coverage and satisfaction in follow-up interactions, as well as high report-generation accuracy across diverse field types. The demonstration video is available at https://www.youtube.com/watch?v=_uFgDO7NoK0.