🤖 AI Summary
This work addresses the limitations of existing assistive robots, which predominantly rely on scripted instructions and struggle to adapt dynamically to users' states and environmental context. The authors propose a neuro-symbolic hybrid system that integrates knowledge graphs with large language models to enable context-aware, adaptive guidance during structured stretching exercises. By leveraging multimodal perception, the system maintains procedural controllability and predictability while enhancing users' perception of the guidance's adaptability and contextual relevance. An exploratory three-participant pilot suggests the adaptive approach outperforms conventional scripted strategies on perceived adaptability and contextual relevance, though scripted guidance remained competitive in smoothness and predictability.
📝 Abstract
Assistive robots have growing potential to support physical wellbeing in home and healthcare settings, for example, by guiding users through stretching or rehabilitation routines. However, existing systems remain largely scripted, which limits their ability to adapt to user state, environmental context, and interaction dynamics. In this work, we present StretchBot, a hybrid neuro-symbolic robotic coach for adaptive assistive guidance. The system combines multimodal perception with knowledge-graph-grounded large language model reasoning to support context-aware adjustments during short stretching sessions while maintaining a structured routine. To complement the system description, we report an exploratory pilot comparison between scripted and adaptive guidance with three participants. The pilot findings suggest that the adaptive condition improved perceived adaptability and contextual relevance, while scripted guidance remained competitive in smoothness and predictability. These results provide preliminary evidence that structured actionable knowledge can help ground language-model-based adaptation in embodied assistive interaction, while also highlighting the need for larger, longitudinal studies to evaluate robustness, generalizability, and long-term user experience.