AI Summary
To address the challenge of simultaneously achieving robustness and precision in visual servoing for tendon-driven continuum robots operating in dynamic, unstructured environments, this paper proposes a Hybrid Visual Servoing (HVS) method. HVS introduces a novel dynamic switching mechanism that seamlessly integrates Image-Based Visual Servoing (IBVS) and Deep Learning-Based Visual Servoing (DLBVS). The framework synergistically combines classical image Jacobian control, end-to-end CNN-based pose estimation, and an adaptive mode-switching strategy. This integration ensures high accuracy and rapid convergence while significantly improving resilience to occlusion, illumination variations, and physical disturbances. Experimental results demonstrate that, compared to pure DLBVS, HVS reduces iteration time by 32%, decreases final-state pose error by 47%, accelerates convergence by a factor of 2.1, and maintains stable closed-loop control under strong external perturbations.
Abstract
This paper introduces a novel Hybrid Visual Servoing (HVS) approach for controlling tendon-driven continuum robots (TDCRs). The HVS system combines Image-Based Visual Servoing (IBVS) with Deep Learning-Based Visual Servoing (DLBVS) to overcome the limitations of each method and improve overall performance. IBVS offers higher accuracy and faster convergence in feature-rich environments, while DLBVS enhances robustness against disturbances and offers a larger workspace. By enabling smooth transitions between IBVS and DLBVS, the proposed HVS ensures effective control in dynamic, unstructured environments. The effectiveness of this approach is validated through simulations and real-world experiments, demonstrating that HVS achieves reduced iteration time, faster convergence, lower final error, and smoother performance compared to DLBVS alone, while maintaining DLBVS's robustness in challenging conditions such as occlusions, lighting changes, actuator noise, and physical impacts.
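The hybrid scheme described above can be sketched as a single control step: a classical IBVS velocity computed from the image-Jacobian pseudoinverse, a deep-learning branch driven by a pose-error estimate, and a weight that switches (and blends near the switching boundary) between them based on how many image features are currently tracked. This is a minimal illustrative sketch, not the paper's implementation: the function names, the linear blending rule, the `visible_fraction` signal, and the gains are all assumptions, and the CNN branch is stubbed out as a given pose-error vector.

```python
import numpy as np

def ibvs_velocity(L, feature_error, gain=0.5):
    """Classical IBVS law: v = -lambda * L^+ * e, where L is the
    image (interaction) Jacobian and e the image-feature error."""
    return -gain * np.linalg.pinv(L) @ feature_error

def dlbvs_velocity(pose_error, gain=0.3):
    """Stand-in for the DLBVS branch: in the paper a CNN estimates the
    pose error from the raw image; here it is assumed to be given."""
    return -gain * pose_error

def hvs_step(L, feature_error, pose_error, visible_fraction, threshold=0.8):
    """One hybrid control step (illustrative switching rule).

    When enough features are reliably tracked (visible_fraction near 1),
    the accurate IBVS branch dominates; under occlusion or poor lighting
    (visible_fraction low), control falls back to the robust DLBVS branch.
    Blending linearly near the threshold avoids velocity discontinuities
    at the mode switch.
    """
    w = np.clip((visible_fraction - threshold) / (1.0 - threshold), 0.0, 1.0)
    return w * ibvs_velocity(L, feature_error) + (1.0 - w) * dlbvs_velocity(pose_error)
```

With all features visible the step reduces to pure IBVS, and with few features tracked it reduces to pure DLBVS, which mirrors the smooth-transition behavior the abstract claims.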