PISHYAR: A Socially Intelligent Smart Cane for Indoor Social Navigation and Multimodal Human-Robot Interaction for Visually Impaired People

📅 2026-02-13
📈 Citations: 0
Influential: 0
📝 Abstract
This paper presents PISHYAR, a socially intelligent smart cane designed by our group to combine socially aware navigation with multimodal human-AI interaction, supporting both physical mobility and interactive assistance. The system consists of two components: (1) a social navigation framework implemented on a Raspberry Pi 5 that integrates real-time RGB-D perception using an OAK-D Lite camera, YOLOv8-based object detection, COMPOSER-based collective activity recognition, D* Lite dynamic path planning, and haptic feedback via vibration motors for tasks such as locating a vacant seat; and (2) an agentic multimodal LLM-VLM interaction framework that integrates speech recognition, vision language models, large language models, and text-to-speech, with dynamic routing between voice-only and vision-only modes to enable natural voice-based communication, scene description, and object localization from visual input. The system is evaluated through a combination of simulation-based tests, real-world field experiments, and user-centered studies. Results from simulated and real indoor environments demonstrate reliable obstacle avoidance and socially compliant navigation, achieving an overall system accuracy of approximately 80% under different social conditions. Group activity recognition further shows robust performance across diverse crowd scenarios. In addition, a preliminary exploratory user study with eight visually impaired and low-vision participants evaluates the agentic interaction framework through structured tasks and a UTAUT-based questionnaire, revealing high acceptance and positive perceptions of usability, trust, and perceived sociability during our experiments. The results highlight the potential of PISHYAR as a multimodal assistive mobility aid that extends beyond navigation to provide socially interactive support for these users.
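The "dynamic routing between voice-only and vision-only modes" described above can be illustrated with a minimal sketch. This is not the authors' code: the keyword set, function names, and mode labels are illustrative assumptions; in the actual system the routing decision would precede the choice of whether to attach a camera frame to the LLM/VLM request.

```python
# Hypothetical sketch of mode routing for an agentic LLM-VLM pipeline.
# A spoken request that references the visual scene is routed to the
# vision path (camera frame + VLM); everything else stays voice-only (LLM).
# VISION_KEYWORDS is an assumed heuristic, not the paper's classifier.

VISION_KEYWORDS = {"see", "look", "describe", "scene", "where", "find", "color"}

def route_query(transcript: str) -> str:
    """Return 'vision' if the request appears to need the camera, else 'voice'."""
    words = set(transcript.lower().split())
    return "vision" if words & VISION_KEYWORDS else "voice"

def handle(transcript: str) -> str:
    mode = route_query(transcript)
    if mode == "vision":
        # Vision mode: capture an RGB frame, send it with the transcript
        # to a vision language model, then speak the answer via TTS (omitted).
        return f"[VLM] {transcript}"
    # Voice mode: plain conversational turn with the LLM, no frame needed.
    return f"[LLM] {transcript}"
```

A real system would likely replace the keyword heuristic with an LLM-based intent classifier, but the control flow (classify, then conditionally attach visual input) is the same.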
Problem

Research questions and friction points this paper is trying to address.

social navigation
visually impaired
multimodal interaction
human-robot interaction
assistive technology
Innovation

Methods, ideas, or system contributions that make the work stand out.

socially intelligent navigation
multimodal LLM-VLM interaction
agentic human-AI interaction
real-time RGB-D perception
haptic feedback for social navigation
Mahdi Haghighat Joo
Social and Cognitive Robotics Laboratory, Sharif University of Technology, Tehran, Iran

Maryam Karimi Jafari
Social and Cognitive Robotics Laboratory, Sharif University of Technology, Tehran, Iran

Alireza Taheri
PhD in Mechanical Engineering, Associate Professor, Sharif University of Technology
Social Robotics · Cognitive Robotics · Human-Robot Interaction · Children with Special Needs