AFT: Appearance-Based Feature Tracking for Markerless and Training-Free Shape Reconstruction of Soft Robots

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address bottlenecks in soft robot 3D shape reconstruction—namely, reliance on artificial markers, large-scale annotated training data, or complex multi-camera setups—this paper proposes a marker-free, training-free vision-based method that achieves real-time, robust 3D reconstruction solely from the robot’s natural surface texture. The approach employs a hierarchical matching strategy: at the lower level, appearance-based feature tracking and implicit visual landmark matching enable multi-view local alignment; at the upper level, kinematic constraints guide global optimization, decoupling deformation modeling from rigid-body motion. The framework requires no specialized background, labeled datasets, or custom hardware, enabling real-time continuum robot tracking in dynamic environments (average end-effector localization error: 2.6% of total length) and demonstrating stability and practicality in closed-loop control. Its core contribution is the first end-to-end, unsupervised 3D shape reconstruction paradigm fully grounded in natural appearance cues.
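The paper's own pipeline is not reproduced here, but the core idea it builds on, treating natural surface texture as implicit visual markers and tracking it between frames, can be illustrated with a toy patch tracker. This is a minimal sketch, not the authors' method: the function names (`ncc`, `track_patch`), the fixed search window, and the synthetic frames are all illustrative assumptions.

```python
# Illustrative sketch only -- NOT the paper's implementation.
# A tiny appearance-based tracker: follow a small surface-texture patch
# between two frames by maximizing normalized cross-correlation (NCC).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_patch(frame0, frame1, top_left, size, search=5):
    """Find where the patch at `top_left` in frame0 moved to in frame1."""
    y0, x0 = top_left
    patch = frame0[y0:y0 + size, x0:x0 + size]
    best_score, best_pos = -1.0, (y0, x0)
    for dy in range(-search, search + 1):          # exhaustive local search
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + size > frame1.shape[0] or x + size > frame1.shape[1]:
                continue
            score = ncc(patch, frame1[y:y + size, x:x + size])
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Synthetic demo: random texture shifted by (2, 3) pixels between frames.
rng = np.random.default_rng(0)
frame0 = rng.random((40, 40))
frame1 = np.roll(np.roll(frame0, 2, axis=0), 3, axis=1)
pos, score = track_patch(frame0, frame1, (10, 10), size=8)
print(pos)  # (12, 13): the patch is recovered at the applied shift
```

Real systems replace the brute-force NCC search with robust descriptors and multi-view matching, but the principle is the same: distinctive natural texture, not an artificial marker, is what gets localized frame to frame.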

📝 Abstract
Accurate shape reconstruction is essential for precise control and reliable operation of soft robots. Compared to sensor-based approaches, vision-based methods offer advantages in cost, simplicity, and ease of deployment. However, existing vision-based methods often rely on complex camera setups, specific backgrounds, or large-scale training datasets, limiting their practicality in real-world scenarios. In this work, we propose a vision-based, markerless, and training-free framework for soft robot shape reconstruction that directly leverages the robot's natural surface appearance. These surface features act as implicit visual markers, enabling a hierarchical matching strategy that decouples local partition alignment from global kinematic optimization. Requiring only an initial 3D reconstruction and kinematic alignment, our method achieves real-time shape tracking across diverse environments while maintaining robustness to occlusions and variations in camera viewpoints. Experimental validation on a continuum soft robot demonstrates an average tip error of 2.6% during real-time operation, as well as stable performance in practical closed-loop control tasks. These results highlight the potential of the proposed approach for reliable, low-cost deployment in dynamic real-world settings.
Problem

Research questions and friction points this paper is trying to address.

Develops markerless vision-based shape reconstruction for soft robots
Eliminates need for complex camera setups or training datasets
Enables real-time tracking robust to occlusions and viewpoint changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Markerless tracking using natural surface appearance
Hierarchical matching with local and global optimization
Training-free real-time shape reconstruction framework
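The second innovation bullet, local alignment decoupled from global kinematic optimization, rests on a standard building block: rigidly aligning a tracked partition of surface points to its reference configuration. A common choice for that step is the Kabsch (Procrustes) algorithm, sketched below under the assumption that per-partition 3D point correspondences are already available from feature tracking; the framing as "partition alignment" is this sketch's reading of the paper, not its published code.

```python
# Hedged sketch: Kabsch/Procrustes rigid alignment, a standard routine for
# the per-partition local alignment stage. Illustrative, not the paper's code.
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t with R @ P_i + t ~= Q_i."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Demo: recover a known rotation about z and a translation.
rng = np.random.default_rng(1)
P = rng.random((20, 3))                 # reference partition points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
Q = P @ R_true.T + t_true               # observed (moved) partition points
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a hierarchical scheme, each partition's rigid fit handles local appearance alignment, leaving a global optimizer to enforce kinematic constraints (continuity, arc-length) across partitions, which is the decoupling of deformation from rigid-body motion the summary describes.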
Shangyuan Yuan
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
Preston Fairchild
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
Yu Mei
Michigan State University
Soft Robotics · Control
Xinyu Zhou
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
Xiaobo Tan
Michigan State University
Control · Mechatronics · Robotics · Smart Materials