🤖 AI Summary
To address the spatial constraints that make depth sensors impractical to deploy in minimally invasive tumor resection, this paper proposes a method for 3D anatomical reconstruction and intraoperative navigation driven by monocular RGB images alone. By integrating semantic segmentation with an optimized Structure-from-Motion (SfM) pipeline, the approach generates anatomically consistent, high-fidelity segmented 3D point clouds, enabling real-time intraoperative 3D scene understanding. Evaluated for the first time on airway tumor resection, the method matches or surpasses state-of-the-art RGB-D systems on key metrics, including postoperative tissue model reconstruction, while requiring no dedicated depth hardware and significantly reducing system footprint and cost. This work establishes a compact, depth-sensor-free paradigm for autonomous monocular surgical navigation, providing a clinically viable technical pathway toward real-time intraoperative guidance and automated surgical intervention.
📝 Abstract
Surgical automation requires precise guidance and understanding of the scene. Current methods in the literature rely on bulky depth cameras to create maps of the anatomy; however, this does not translate well to space-limited clinical applications. Monocular cameras are small and enable minimally invasive surgery in tight spaces, but additional processing is required to generate 3D scene understanding. We propose a 3D mapping pipeline that uses only RGB images to create segmented point clouds of the target anatomy. To ensure the most precise reconstruction, we compare the performance of different Structure-from-Motion algorithms on mapping central airway obstructions, and we test the pipeline on a downstream task of tumor resection. On several metrics, including post-procedure tissue model evaluation, our pipeline performs comparably to RGB-D cameras and, in some cases, even surpasses them. These promising results demonstrate that automation guidance can be achieved in minimally invasive procedures with monocular cameras. This study is a step toward the complete autonomy of surgical robots.
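As a concrete illustration of how segmentation can be fused with SfM to produce a labeled point cloud, the sketch below runs a sparse reconstruction and then assigns each 3D point the majority semantic class of its 2D observations. This is a minimal sketch, not the authors' implementation: it assumes COLMAP (via the `pycolmap` bindings) as the SfM backend, precomputed per-frame segmentation masks, and illustrative directory names and class IDs; the paper compares several SfM algorithms, and COLMAP here merely stands in for one of them.

```python
"""Minimal sketch: monocular RGB frames -> segmented sparse point cloud.

Assumptions (not from the paper): pycolmap as the SfM backend, and
precomputed per-frame semantic masks stored as PNGs whose pixel values
are class IDs (e.g., 0 = background, 1 = airway wall, 2 = tumor).
"""
from collections import Counter
from pathlib import Path

import numpy as np
import pycolmap
from PIL import Image

IMAGE_DIR = Path("frames/")    # monocular RGB endoscope frames (assumed layout)
MASK_DIR = Path("masks/")      # one semantic mask PNG per frame (assumed layout)
WORK_DIR = Path("sfm_workspace/")
DB_PATH = WORK_DIR / "database.db"

WORK_DIR.mkdir(exist_ok=True)

# 1) Sparse SfM from RGB only: feature extraction, exhaustive matching,
#    then incremental mapping (camera poses + 3D points).
pycolmap.extract_features(DB_PATH, IMAGE_DIR)
pycolmap.match_exhaustive(DB_PATH)
maps = pycolmap.incremental_mapping(DB_PATH, IMAGE_DIR, WORK_DIR)
rec = maps[0]  # keep the first reconstructed model

# Load one semantic mask per registered frame, keyed by image name.
masks = {
    img.name: np.asarray(Image.open(MASK_DIR / (Path(img.name).stem + ".png")))
    for img in rec.images.values()
}

# 2) Label fusion: each 3D point is observed in several frames (its track);
#    sample the mask at every observation and keep the majority class.
points, labels = [], []
for p3d in rec.points3D.values():
    votes = []
    for el in p3d.track.elements:
        img = rec.images[el.image_id]
        u, v = img.points2D[el.point2D_idx].xy  # pixel coords (x, y)
        mask = masks[img.name]
        r, c = int(round(v)), int(round(u))
        if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]:
            votes.append(int(mask[r, c]))
    if votes:
        points.append(p3d.xyz)
        labels.append(Counter(votes).most_common(1)[0][0])

points = np.asarray(points)        # (N, 3) anatomy point cloud
labels = np.asarray(labels)        # (N,) per-point semantic class
tumor_cloud = points[labels == 2]  # e.g., the points voted as tumor
```

Voting across a point's whole track is one simple way to make per-point labels robust to occasional per-frame segmentation errors; the paper's actual fusion strategy and hyperparameters are not specified here.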