Markerless 6D Pose Estimation and Position-Based Visual Servoing for Endoscopic Continuum Manipulators

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of high-precision pose estimation and closed-loop control for flexible endoscopic continuum manipulators, which is hindered by hysteresis, compliance, and limited distal sensing. The work proposes the first fully markerless, sensor-free visual servoing framework that integrates stereo-vision-based 6D pose estimation with position-based control. The approach leverages a multi-feature fusion network incorporating segmentation masks, keypoints, heatmaps, and bounding boxes, enhanced by a feed-forward rendering-based residual pose refinement module, photorealistic simulation training, and a self-supervised domain adaptation strategy. Evaluated on 1,000 real-world samples, the method achieves average pose estimation errors of 0.83 mm in translation and 2.76° in rotation. In closed-loop trajectory tracking, it yields errors of 2.07 mm and 7.41°, reductions of 85% and 59%, respectively, relative to open-loop control.
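The reported accuracy figures (mm translation error, degree rotation error) correspond to the standard 6D pose metrics: Euclidean distance between translation vectors and geodesic distance on SO(3). A minimal sketch of how such errors are conventionally computed, assuming poses as 4×4 homogeneous matrices (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Translation error (same units as the poses) and geodesic rotation
    error (degrees) between two 4x4 homogeneous transformation matrices."""
    # Euclidean distance between the translation components.
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    # Relative rotation between estimate and ground truth.
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    # Geodesic distance on SO(3); clamp the cosine for numerical safety.
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_a))
    return t_err, r_err
```

Averaging these two quantities over an evaluation set yields the mean translation and rotation errors quoted above.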

📝 Abstract
Continuum manipulators in flexible endoscopic surgical systems offer high dexterity for minimally invasive procedures; however, accurate pose estimation and closed-loop control remain challenging due to hysteresis, compliance, and limited distal sensing. Vision-based approaches reduce hardware complexity but are often constrained by limited geometric observability and high computational overhead, restricting real-time closed-loop applicability. This paper presents a unified framework for markerless stereo 6D pose estimation and position-based visual servoing of continuum manipulators. A photo-realistic simulation pipeline enables large-scale automatic training with pixel-accurate annotations. A stereo-aware multi-feature fusion network jointly exploits segmentation masks, keypoints, heatmaps, and bounding boxes to enhance geometric observability. To enforce geometric consistency without iterative optimization, a feed-forward rendering-based refinement module predicts residual pose corrections in a single pass. A self-supervised sim-to-real adaptation strategy further improves real-world performance using unlabeled data. Extensive real-world validation achieves a mean translation error of 0.83 mm and a mean rotation error of 2.76° across 1,000 samples. Markerless closed-loop visual servoing driven by the estimated pose attains accurate trajectory tracking with a mean translation error of 2.07 mm and a mean rotation error of 7.41°, corresponding to 85% and 59% reductions compared to open-loop control, together with high repeatability in repeated point-reaching tasks. To the best of our knowledge, this work presents the first fully markerless pose-estimation-driven position-based visual servoing framework for continuum manipulators, enabling precise closed-loop control without physical markers or embedded sensing.
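Position-based visual servoing closes the control loop directly on the estimated pose rather than on image features. A minimal sketch of the generic proportional PBVS control law (the gain, function name, and twist convention are illustrative assumptions, not the paper's controller):

```python
import numpy as np

def pbvs_twist(T_cur, T_des, lam=1.0):
    """Generic proportional PBVS: map the error between the current and
    desired 4x4 poses to a 6D twist command [v; w].

    The rotational part uses the axis-angle (log map) of the relative
    rotation, scaled by a proportional gain lam."""
    # Translation error in the reference frame.
    e_t = T_des[:3, 3] - T_cur[:3, 3]
    # Relative rotation from current to desired orientation.
    R_err = T_des[:3, :3] @ T_cur[:3, :3].T
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_a)
    if angle < 1e-8:
        w = np.zeros(3)  # already aligned; no rotational command
    else:
        # Rotation axis scaled by angle (vee of the matrix logarithm).
        w = angle / (2.0 * np.sin(angle)) * np.array([
            R_err[2, 1] - R_err[1, 2],
            R_err[0, 2] - R_err[2, 0],
            R_err[1, 0] - R_err[0, 1],
        ])
    return lam * np.concatenate([e_t, w])
```

In a framework like the one described, the markerless stereo pose estimate would supply `T_cur` at each control step, with the commanded twist mapped through the manipulator's (hysteresis-affected) kinematics.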
Problem

Research questions and friction points this paper is trying to address.

continuum manipulators
6D pose estimation
visual servoing
markerless
endoscopic surgery
Innovation

Methods, ideas, or system contributions that make the work stand out.

markerless pose estimation
continuum manipulator
position-based visual servoing
multi-feature fusion
sim-to-real adaptation
Junhyun Park
Daegu Gyeongbuk Institute of Science and Technology (DGIST)
Robotics, AI, Surgical Robotics, Biomedical Engineering, Autonomy

Chunggil An
Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea

Myeongbo Park
DGIST
Robot

Ihsan Ullah
University of Balochistan, Quetta, Pakistan
P2P Video Streaming, P2P IPTV, IPTV User Behavior, IoT, Multimedia Communication

Sihyeong Park
Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea

Minho Hwang
Daegu Gyeongbuk Institute of Science and Technology (DGIST)
Robotics and Control, Automation and Learning, Surgical Robotics, Mechanism Design