DualVision ArthroNav: Investigating Opportunities to Enhance Localization and Reconstruction in Image-based Arthroscopy Navigation via External Cameras

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing monocular arthroscopic navigation systems suffer from inherent limitations, including scale ambiguity, cumulative drift, and sensitivity to rapid motion and occlusion. This paper proposes a multi-view navigation framework that fuses data from a rigidly mounted external camera with endoscopic video: the external camera provides robust visual odometry and absolute pose priors, while the arthroscope delivers high-fidelity anatomical detail; their integration via joint optimization enables complementary performance, overcoming both the spatial constraints of optical tracking and the long-term drift of monocular SLAM. The system incorporates multi-view geometric calibration, tightly coupled visual-inertial odometry (VIO), and dense reconstruction. In intraoperative evaluation, it achieves a mean trajectory error of 1.09 mm, a target registration error of 2.16 mm, an SSIM of 0.69, and a PSNR of 22.19 dB, demonstrating substantial improvements in localization accuracy and reconstruction fidelity, with strong clinical feasibility.

📝 Abstract
Arthroscopic procedures can greatly benefit from navigation systems that enhance spatial awareness, depth perception, and field of view. However, existing optical tracking solutions impose strict workspace constraints and disrupt surgical workflow. Vision-based alternatives, though less invasive, often rely solely on the monocular arthroscope camera, making them prone to drift, scale ambiguity, and sensitivity to rapid motion or occlusion. We propose DualVision ArthroNav, a multi-camera arthroscopy navigation system that integrates an external camera rigidly mounted on the arthroscope. The external camera provides stable visual odometry and absolute localization, while the monocular arthroscope video enables dense scene reconstruction. By combining these complementary views, our system resolves the scale ambiguity and long-term drift inherent in monocular SLAM and ensures robust relocalization. Experiments demonstrate that our system effectively compensates for calibration errors, achieving an average absolute trajectory error of 1.09 mm. The reconstructed scenes reach an average target registration error of 2.16 mm, with high visual fidelity (SSIM = 0.69, PSNR = 22.19 dB). These results indicate that our system provides a practical and cost-efficient solution for arthroscopic navigation, bridging the gap between optical tracking and purely vision-based systems, and paving the way toward clinically deployable, fully vision-based arthroscopic guidance.
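The abstract's core idea is that the external camera's metrically scaled odometry can pin down the unknown global scale of the monocular arthroscope trajectory. The paper describes a joint optimization; the sketch below illustrates only the simplest closed-form version of the scale step, a least-squares fit over time-aligned per-frame displacements. Function names and data are illustrative assumptions, not taken from the paper.

```python
def fit_scale(mono_positions, ext_positions):
    """Least-squares scale s minimizing sum ||s*d_mono - d_ext||^2 over
    per-frame displacement vectors. Assumes the two trajectories are
    time-aligned and expressed in a common orientation frame."""
    num = 0.0  # accumulates <d_mono, d_ext>
    den = 0.0  # accumulates <d_mono, d_mono>
    for i in range(1, len(mono_positions)):
        d_mono = [a - b for a, b in zip(mono_positions[i], mono_positions[i - 1])]
        d_ext = [a - b for a, b in zip(ext_positions[i], ext_positions[i - 1])]
        num += sum(m * e for m, e in zip(d_mono, d_ext))
        den += sum(m * m for m in d_mono)
    return num / den

# Toy example: a monocular trajectory that is off by a factor of 2.5
mono = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
ext = [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0), (2.5, 5.0, 0.0)]
print(fit_scale(mono, ext))  # → 2.5
```

In a real system this scale would be one variable inside the joint optimization rather than a standalone post-hoc fit, but the closed form shows why even a coarse external odometry stream suffices to remove monocular scale ambiguity.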
Problem

Research questions and friction points this paper is trying to address.

Enhancing spatial awareness in arthroscopy navigation systems
Resolving scale ambiguity and drift in monocular SLAM
Integrating external cameras for robust surgical localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates external camera on arthroscope for stable localization
Combines external and arthroscope views to resolve scale ambiguity
Achieves millimeter accuracy in trajectory and scene reconstruction
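The millimeter figures quoted above (1.09 mm average absolute trajectory error) follow the standard ATE convention: after aligning the estimated trajectory to ground truth, average the per-frame Euclidean position errors. A minimal sketch, assuming the alignment step has already been done:

```python
import math

def mean_ate(est, gt):
    """Mean absolute trajectory error: average Euclidean distance between
    time-aligned estimated and ground-truth positions. Rigid alignment of
    the two trajectories is assumed to have been performed beforehand."""
    return sum(math.dist(p, q) for p, q in zip(est, gt)) / len(est)

# Toy example: estimate offset from ground truth by 1 unit along z
est = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
gt = [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]
print(mean_ate(est, gt))  # → 1.0
```

Target registration error (the 2.16 mm figure) is computed the same way, but over designated anatomical target points rather than camera poses.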