Strategy-Supervised Autonomous Laparoscopic Camera Control via Event-Driven Graph Mining

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses autonomous laparoscopic camera control: maintaining a stable, safe, and semantically consistent surgical field of view during rapid instrument-tissue interactions. The authors propose an event-driven, structured approach that first mines temporal events from offline surgical videos to construct an attributed event graph capturing reusable camera-control primitives. During online execution, this graph guides a fine-tuned vision-language model (VLM) to predict both camera strategies and discrete motion commands, which are executed by an IBVS-RCM controller; voice input additionally enables human-in-the-loop intervention. The study is the first to integrate event-graph mining with policy supervision, achieving interpretable, safe, and expert-aligned autonomous control. Experiments show an event-parsing F1 score of 0.86 and a strategy-clustering purity of 0.81; in ex vivo tests, the method reduces field-of-view centering error by 35.26% and image jitter by 62.33%, outperforming junior surgeons.
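A minimal sketch may help make the offline stage concrete. It assumes parsed events carry a type and a time span (the event types mirror the abstract; the attribute names, the overlap rule for edges, and the use of networkx are illustrative assumptions, not the paper's implementation):

```python
import networkx as nx

# Hypothetical parsed events; types follow the abstract (interaction,
# working-distance deviation, view-quality degradation).
events = [
    {"id": 0, "type": "interaction",        "t_start": 12.4, "t_end": 15.1},
    {"id": 1, "type": "distance_deviation", "t_start": 14.8, "t_end": 17.0},
    {"id": 2, "type": "view_degradation",   "t_start": 16.5, "t_end": 18.2},
]

# Attributed event graph: nodes are events with their attributes; a directed
# edge links event a to event b when b starts inside a's time span.
G = nx.DiGraph()
for ev in events:
    G.add_node(ev["id"], **ev)
for a in events:
    for b in events:
        if a["id"] != b["id"] and a["t_start"] <= b["t_start"] <= a["t_end"]:
            G.add_edge(a["id"], b["id"], relation="overlaps")

# Recurring subgraphs across many such per-video graphs would be the
# candidate strategy primitives; printing edges stands in for that mining step.
print(list(G.edges(data=True)))
```

Frequent-subgraph mining over many such per-video graphs would then yield the compact strategy set whose semantic alignment the paper evaluates (cluster purity 0.81).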

📝 Abstract
Autonomous laparoscopic camera control must maintain a stable and safe surgical view under rapid tool-tissue interactions while remaining interpretable to surgeons. We present a strategy-grounded framework that couples high-level vision-language inference with low-level closed-loop control. Offline, raw surgical videos are parsed into camera-relevant temporal events (e.g., interaction, working-distance deviation, and view-quality degradation) and structured as attributed event graphs. Mining these graphs yields a compact set of reusable camera-handling strategy primitives, which provide structured supervision for learning. Online, a fine-tuned Vision-Language Model (VLM) processes the live laparoscopic view to predict the dominant strategy and discrete image-based motion commands, executed by an IBVS-RCM controller under strict safety constraints; optional speech input enables intuitive human-in-the-loop conditioning. On a surgeon-annotated dataset, event parsing achieves reliable temporal localization (F1-score 0.86), and the mined strategies show strong semantic alignment with expert interpretation (cluster purity 0.81). Extensive ex vivo experiments on silicone phantoms and porcine tissues demonstrate that the proposed system outperforms junior surgeons in standardized camera-handling evaluations, reducing field-of-view centering error by 35.26% and image shaking by 62.33%, while preserving smooth motion and stable working-distance regulation.
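For readers unfamiliar with the low-level loop, classical image-based visual servoing (IBVS) maps an image-space feature error to a camera twist via v = -λ L⁺ e, where L is the interaction matrix. The sketch below implements that textbook law for a single point feature and crudely emulates the remote-center-of-motion (RCM) trocar constraint by disallowing lateral translation; the gain, depth, and constraint handling are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def ibvs_rcm_velocity(feature, target, depth, lam=0.5):
    """One IBVS step with a simplified RCM restriction.

    feature, target: normalized image coordinates (x, y) of the current and
    desired point. depth: estimated feature depth Z (assumed known here).
    Returns a camera twist [vx, vy, vz, wx, wy, wz]; vx and vy are forced to
    zero as a toy stand-in for the trocar (RCM) constraint.
    """
    x, y = feature
    Z = depth
    # Standard interaction matrix for one point feature (Chaumette & Hutchinson).
    L = np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])
    e = np.asarray(feature) - np.asarray(target)  # feature error in the image
    # RCM restriction (simplified): allow only insertion vz and rotations.
    allowed = [2, 3, 4, 5]
    u, *_ = np.linalg.lstsq(L[:, allowed], -lam * e, rcond=None)
    twist = np.zeros(6)
    twist[allowed] = u
    return twist

# Example: drive a feature at (0.10, -0.05) toward the image center.
print(ibvs_rcm_velocity((0.10, -0.05), (0.0, 0.0), depth=0.08))
```

A real trocar constraint projects the twist through the pivot kinematics rather than zeroing components, but the least-squares structure is the same.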
Problem

Research questions and friction points this paper is trying to address.

autonomous laparoscopic camera control
surgical view stability
tool-tissue interaction
interpretability
camera-handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

event-driven graph mining
strategy-supervised learning
vision-language model (see the sketch after this list)
autonomous camera control
IBVS-RCM
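As a concrete shape for the VLM's structured output named above (a dominant strategy plus a discrete image-based motion command, per the abstract), the following is a minimal sketch; every field name, strategy label, and the confidence gate are hypothetical, not the paper's interface:

```python
from dataclasses import dataclass
from enum import Enum

class Motion(Enum):
    # Discrete image-based commands; the actual command set is not given
    # in this summary, so these values are illustrative.
    PAN_LEFT = "pan_left"
    PAN_RIGHT = "pan_right"
    TILT_UP = "tilt_up"
    TILT_DOWN = "tilt_down"
    ZOOM_IN = "zoom_in"
    ZOOM_OUT = "zoom_out"
    HOLD = "hold"

@dataclass
class CameraDecision:
    strategy: str      # mined strategy primitive, e.g. "recenter_on_tool"
    motion: Motion     # discrete command handed to the IBVS-RCM controller
    confidence: float  # model confidence, used to gate execution

def gate(decision: CameraDecision, threshold: float = 0.7) -> Motion:
    """Hold the camera when the predicted strategy is low-confidence."""
    return decision.motion if decision.confidence >= threshold else Motion.HOLD

print(gate(CameraDecision("recenter_on_tool", Motion.PAN_LEFT, 0.82)))
```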
👥 Authors
Keyu Zhou
Hangzhou Dianzi University, Hangzhou 310018, China
Peisen Xu
Doctor of Philosophy, National University of Singapore
human-computer interaction · human-robot interaction · virtual reality · augmented reality
Yahao Wu
Hangzhou Dianzi University, Hangzhou 310018, China
Jiming Chen
Hangzhou Dianzi University, Hangzhou 310018, China
Gaofeng Li
Zhejiang University, Hangzhou 310058, China
Shunlei Li
The Chinese University of Hong Kong
Robotics · Computer Vision · AI for Science