Vision-Aided Online A* Path Planning for Efficient and Safe Navigation of Service Robots

📅 2025-11-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional navigation systems rely on LiDAR for geometric perception alone, lacking semantic understanding and thus failing to identify context-critical objects (e.g., scattered important documents), which compromises safety and practicality. To address this, we propose a lightweight framework that tightly integrates visual semantic perception with online A* path planning. Our method employs an embedded-friendly semantic segmentation model to detect user-defined visual constraints in real time; these are projected as dynamic non-geometric obstacles and fused with geometric sensor data to continuously update a global semantic map. This map drives online A* planning for context-aware, real-time obstacle avoidance. To the best of our knowledge, this is the first work to achieve closed-loop navigation on low-cost embedded platforms wherein obstacle definitions are dynamically reconstructed based on visual context. Experiments in simulation and on real robotic platforms demonstrate low latency, high robustness, and significantly improved safety and task adaptability for service robots operating in complex, unknown environments.
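The fusion step described above, where detected visual constraints are projected as non-geometric obstacles and merged with geometric sensor data, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the boolean-grid representation, and the assumption that the segmentation mask has already been projected into the map frame are all ours.

```python
import numpy as np

def fuse_semantic_obstacles(geometric_grid, semantic_mask):
    """Fuse geometric occupancy with semantic visual constraints.

    geometric_grid: 2D bool array, True where LiDAR/depth sensing reports
                    a physical obstacle.
    semantic_mask:  2D bool array of the same shape, True where the
                    segmentation model flagged a user-defined constraint
                    (e.g. scattered documents) after projection into the
                    map frame.

    A cell is treated as blocked if either source marks it, so regions
    that are physically traversable but semantically critical are
    avoided as well.
    """
    if geometric_grid.shape != semantic_mask.shape:
        raise ValueError("grids must share the same shape")
    return np.logical_or(geometric_grid, semantic_mask)
```

The combined grid can then be handed directly to any grid-based planner, which is what lets a purely geometric planner respect visual context without modification.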

📝 Abstract
The deployment of autonomous service robots in human-centric environments is hindered by a critical gap in perception and planning. Traditional navigation systems rely on expensive LiDARs that, while geometrically precise, are semantically unaware: they cannot distinguish an important document on an office floor from a harmless piece of litter, treating both as physically traversable. While advanced semantic segmentation exists, no prior work has successfully integrated this visual intelligence into a real-time path planner that is efficient enough for low-cost, embedded hardware. This paper presents a framework to bridge this gap, delivering context-aware navigation on an affordable robotic platform. Our approach centers on a novel, tight integration of a lightweight perception module with an online A* planner. The perception system employs a semantic segmentation model to identify user-defined visual constraints, enabling the robot to navigate based on contextual importance rather than physical size alone. This adaptability allows an operator to define what is critical for a given task, be it sensitive papers in an office or safety lines in a factory, thus resolving the ambiguity of what to avoid. This semantic perception is seamlessly fused with geometric data: the identified visual constraints are projected as non-geometric obstacles onto a global map that is continuously updated from sensor data, enabling robust navigation through both partially known and unknown environments. We validate our framework through extensive experiments in high-fidelity simulations and on a real-world robotic platform. The results demonstrate robust, real-time performance, proving that a cost-effective robot can safely navigate complex environments while respecting critical visual cues invisible to traditional planners.
Problem

Research questions and friction points this paper is trying to address.

Traditional navigation systems lack semantic awareness for context-sensitive obstacle avoidance
No prior work integrates visual intelligence into real-time path planners for embedded hardware
Autonomous robots cannot distinguish important objects from harmless litter using LiDAR alone
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight semantic segmentation identifies visual constraints
Online A* planner integrates semantic and geometric data
Projects non-geometric obstacles onto continuously updated map
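The online A* planning named in the bullets above can be illustrated with a minimal grid-based sketch. This is a generic 4-connected A* over a boolean obstacle grid (such as one fusing geometric and semantic obstacles), written for clarity rather than embedded efficiency; it is our illustration under those assumptions, not the authors' planner.

```python
import heapq

def astar(grid, start, goal):
    """Minimal 4-connected A* on a boolean obstacle grid.

    grid:  list of lists (or 2D array); grid[r][c] == True means the
           cell is blocked (by a geometric or semantic obstacle).
    start, goal: (row, col) tuples.
    Returns the path as a list of cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible for unit-cost 4-connected moves.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from = {}
    g_cost = {start: 0}
    while open_heap:
        _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:        # already expanded via a cheaper route
            continue
        came_from[cur] = parent
        if cur == goal:             # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

In an online setting, this search is simply rerun whenever the fused map changes, so newly detected semantic obstacles immediately reroute the robot.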
Praveen Kumar
Department of Electrical Engineering, Indian Institute of Technology Kanpur, India
Tushar Sandhan
Assistant Professor, Electrical Engineering, IIT Kanpur
Computer vision · Machine learning · Robotics