Image-Based Roadmaps for Vision-Only Planning and Control of Robotic Manipulators

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses collision-free motion planning and visual servoing for robots that lack geometric models and proprioceptive sensors. Methodologically, it constructs a motion-planning roadmap directly in image feature space: keypoint-based sampling establishes nodes, neighbor connections are formed with learned or predefined image-distance metrics, and visual features drive collision checking and path search. An adaptive visual servo controller is then coupled to the roadmap to execute path following. The core contribution is a vision-native roadmap paradigm that requires neither a geometric model nor joint-state feedback, unifying planning and control entirely within the image space. Experiments show that paths generated with the learned-distance roadmap achieved a 100% control convergence rate, validating the effectiveness and robustness of purely vision-based path planning and closed-loop trajectory tracking.

📝 Abstract
This work presents a motion planning framework for robotic manipulators that computes collision-free paths directly in image space. The generated paths can then be tracked using vision-based control, eliminating the need for an explicit robot model or proprioceptive sensing. At the core of our approach is the construction of a roadmap entirely in image space. To achieve this, we explicitly define sampling, nearest-neighbor selection, and collision checking based on visual features rather than geometric models. We first collect a set of image-space samples by moving the robot within its workspace, capturing keypoints along its body at different configurations. These samples serve as nodes in the roadmap, which we construct using either learned or predefined distance metrics. At runtime, the roadmap generates collision-free paths directly in image space, removing the need for a robot model or joint encoders. We validate our approach through an experimental study in which a robotic arm follows planned paths using an adaptive vision-based control scheme to avoid obstacles. The results show that paths generated with the learned-distance roadmap achieved 100% success in control convergence, whereas the predefined image-space distance roadmap enabled faster transient responses but had a lower success rate in convergence.
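The roadmap construction and query described above can be sketched as a small probabilistic-roadmap (PRM) routine operating on image-space samples. This is a minimal illustration, not the paper's implementation: each sample stands in for a vector of keypoint coordinates captured from one robot configuration, the default Euclidean metric plays the role of the predefined image-space distance (a learned metric would be passed in its place), and `edge_is_free` is a hypothetical placeholder for the visual collision check.

```python
# Minimal image-space PRM sketch. Assumptions: samples are flat tuples of
# keypoint image coordinates; `dist` is a pluggable image-distance metric
# (predefined Euclidean by default, or a learned metric); `edge_is_free`
# is a placeholder for a vision-based edge collision check.
import heapq
import math

def build_roadmap(samples, k=5, dist=None, edge_is_free=lambda a, b: True):
    """Connect each sample to its k nearest neighbors under `dist`."""
    if dist is None:
        dist = math.dist  # predefined image-space (Euclidean) metric
    n = len(samples)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        neighbors = sorted(
            (dist(samples[i], samples[j]), j) for j in range(n) if j != i
        )
        for d, j in neighbors[:k]:
            if edge_is_free(samples[i], samples[j]):  # visual collision check
                graph[i].append((j, d))
    return graph

def shortest_path(graph, start, goal):
    """Dijkstra search over the roadmap; returns a node sequence or None."""
    frontier, best, prev = [(0.0, start)], {start: 0.0}, {}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:  # reconstruct the path of image-space waypoints
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt, w in graph[node]:
            if cost + w < best.get(nxt, math.inf):
                best[nxt] = cost + w
                prev[nxt] = node
                heapq.heappush(frontier, (cost + w, nxt))
    return None
```

At runtime, the returned node sequence is a chain of image-feature waypoints; in the paper's framework these are handed to the adaptive vision-based controller for tracking, so no joint encoders or geometric robot model enter the loop.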
Problem

Research questions and friction points this paper is trying to address.

Vision-only motion planning for robots
Collision-free paths in image space
Dependence on explicit robot models and proprioceptive sensing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image-space motion planning
Vision-based control
Collision-free path generation