GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant For Blind Travelers

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current robotic navigation systems for blind and low-vision (BLV) users lack design paradigms grounded in authentic user experience. This paper proposes a vision-only navigation method inspired by guide dog training, introducing, for the first time, the teach-and-repeat paradigm from canine guidance into robotic system design. It eliminates expensive sensors (e.g., LiDAR) and instead constructs a lightweight topological map via visual place recognition, temporal filtering, and relative pose estimation to enable robust path memorization and replay. Deployed on a quadrupedal robot, the system achieves kilometer-scale, cross-temporal, and cross-environment repeated navigation across five outdoor scenes, with a path success rate exceeding 95%. A user study confirms its practical feasibility for current guide dog users. The core contribution is the formalization of biologically informed assistance strategies into a transferable robotic navigation framework, empirically validating the reliability of low-cost vision-based navigation in long-range, dynamic real-world settings.

📝 Abstract
While commendable progress has been made in user-centric research on mobile assistive systems for blind and low-vision (BLV) individuals, references that directly inform robot navigation design remain rare. To bridge this gap, we conducted a comprehensive human study involving interviews with 26 guide dog handlers, four white cane users, nine guide dog trainers, and one O&M trainer, along with 15+ hours of observing guide dog-assisted walking. After de-identification, we open-sourced the dataset to promote human-centered development and informed decision-making for assistive systems for BLV people. Building on insights from this formative study, we developed GuideNav, a vision-only, teach-and-repeat navigation system. Inspired by how guide dogs are trained and assist their handlers, GuideNav autonomously repeats a path demonstrated by a sighted person using a robot. Specifically, the system constructs a topological representation of the taught route, integrates visual place recognition with temporal filtering, and employs a relative pose estimator to compute navigation actions, all without relying on costly, heavy, power-hungry sensors such as LiDAR. In field tests, GuideNav consistently achieved kilometer-scale route following across five outdoor environments, maintaining reliability despite noticeable scene variations between teach and repeat runs. A user study with three guide dog handlers and one guide dog trainer further confirmed the system's feasibility, marking (to our knowledge) the first demonstration of a quadruped mobile system retrieving a path in a manner comparable to guide dogs.
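The repeat phase described in the abstract (localize against the taught topological route via place recognition, constrained by a temporal filter) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the cosine-similarity matcher, the `window` parameter, and the random descriptors are all assumptions standing in for learned place-recognition embeddings and the system's actual filtering.

```python
import numpy as np

def localize(query_desc, node_descs, prev_idx, window=5):
    """Match the current camera view to a node of the taught route.

    Temporal filtering: instead of searching the whole route, only
    nodes within a small window ahead of (and one behind) the
    previously matched node are considered, which suppresses
    perceptual aliasing between distant, similar-looking places.
    """
    lo = max(0, prev_idx - 1)
    hi = min(len(node_descs), prev_idx + window)
    sims = []
    for d in node_descs[lo:hi]:
        # cosine similarity as a stand-in for a learned VPR score
        sims.append(float(np.dot(query_desc, d) /
                          (np.linalg.norm(query_desc) * np.linalg.norm(d))))
    best = lo + int(np.argmax(sims))
    return best, sims[best - lo]

if __name__ == "__main__":
    # Hypothetical taught route: 20 nodes with random 64-d descriptors
    # (a real system would store learned image embeddings per node).
    rng = np.random.default_rng(0)
    route = [rng.normal(size=64) for _ in range(20)]
    # Query: a slightly perturbed view of node 7, as if revisiting it.
    query = route[7] + 0.01 * rng.normal(size=64)
    idx, score = localize(query, route, prev_idx=6)
    print(f"matched node {idx} with similarity {score:.3f}")
```

Once the matched node is known, the system's relative pose estimator (not shown here) would compare the live image against that node's stored view to produce a steering command toward the taught path.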
Problem

Research questions and friction points this paper is trying to address.

Develops a vision-only robotic navigation assistant for blind travelers.
Addresses the lack of user-informed design in robot navigation systems.
Enables autonomous route following without expensive sensors like LiDAR.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-only teach-and-repeat navigation system
Topological route representation with visual place recognition
Autonomous kilometer-scale outdoor path following