🤖 AI Summary
To address the visual limitations of single-view teleoperation and the lack of human-robot collaboration in minimally invasive surgery, this paper proposes a multi-view, shared-autonomy surgical paradigm. Methodologically, we develop an open-source multi-view laparoscopic teleoperation robotic system: it integrates a high-frame-rate synchronized multi-camera array with real-time stereo matching and 3D reconstruction algorithms, and extends the da Vinci Research Kit’s control logic to support dual-view independent manipulation and ROS2-driven shared autonomous decision-making. We introduce the first open-source multi-view laparoscopic visualization framework, enabling intraoperative real-time 3D perception, asynchronous multi-surgeon viewpoint coordination, and intuitive instrument mapping. Prototype validation confirms synchronized dual-view imaging and teleoperation. The resulting system provides a reproducible, extensible, full-stack open-source research platform for shared autonomy in robotic surgery.
📝 Abstract
As robots for minimally invasive surgery (MIS) gradually become more accessible and modular, we believe there is a great opportunity to rethink and expand the visualization and control paradigms that have characterized surgical teleoperation since its inception. We conjecture that introducing one or more additional adjustable viewpoints in the abdominal cavity would not only unlock novel visualization and collaboration strategies for surgeons but also substantially boost the robustness of machine perception toward shared autonomy. Immediate advantages include controlling a second viewpoint and teleoperating surgical tools from a different perspective, which would allow collaborating surgeons to adjust their views independently and still maneuver their robotic instruments intuitively. Furthermore, we believe that capturing synchronized multi-view 3D measurements of the patient's anatomy would unlock advanced scene representations. Accurate real-time intraoperative 3D perception will allow algorithmic assistants to directly control one or more robotic instruments and/or robotic cameras. Toward these goals, we are building a synchronized multi-viewpoint, multi-sensor robotic surgery system by integrating high-performance vision components and upgrading the da Vinci Research Kit control logic. This short paper reports a functional summary of our setup and elaborates on its potential impacts in research and future clinical practice. By fully open-sourcing our system, we will enable the research community to reproduce our setup, improve it, and develop powerful algorithms, effectively boosting clinical translation of cutting-edge research.
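To make the stereo-matching building block behind "synchronized multi-view 3D measurements" concrete, here is a deliberately minimal sketch of disparity estimation on a rectified stereo pair. The naive SAD block matcher, the synthetic images, and every parameter below are illustrative assumptions for exposition only; they are not the paper's actual real-time pipeline, which would use far faster matchers on calibrated laparoscopic streams.

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  block: int = 5, max_disp: int = 8) -> np.ndarray:
    """Naive sum-of-absolute-differences block matching on a rectified
    grayscale stereo pair. Returns a per-pixel disparity map; border
    pixels that cannot host a full block stay at 0. Illustrative only."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.float32)
            best_cost, best_d = None, 0
            # For each candidate disparity d, compare the left patch at x
            # against the right patch at x - d (standard rectified geometry).
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic test pair: random texture, with the right view shifted so that
# every pixel has a ground-truth disparity of 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (32, 64)).astype(np.uint8)
right = np.roll(left, -4, axis=1)  # right[x] = left[x + 4]  ->  disparity 4

disp = sad_disparity(left, right)
print(disp.shape)  # (32, 64)
```

Given the exact 4-pixel shift, the matcher recovers a disparity of 4 across the textured interior; with calibrated cameras, such disparities convert to metric depth via triangulation, which is the raw material for the intraoperative 3D scene representations the abstract describes.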