🤖 AI Summary
This work addresses autonomous laparoscopic camera control: maintaining a stable, safe, and semantically consistent surgical field of view during rapid instrument-tissue interactions. The authors propose an event-driven, structured approach that first mines temporal events from offline surgical videos to construct an attributed event graph capturing reusable camera-control primitives. During online execution, this graph guides a fine-tuned vision-language model (VLM) to predict both camera strategies and motion commands, which are executed by an IBVS-RCM controller; the system also supports voice-based intervention. The study is presented as the first to integrate event-graph mining with policy supervision, achieving interpretable, safe, and expert-aligned autonomous control. Experiments demonstrate an event-parsing F1 score of 0.86 and a strategy-clustering purity of 0.81; in ex vivo tests, the method reduces field-of-view centering error by 35.26% and image jitter by 62.33%, outperforming junior surgeons.
📝 Abstract
Autonomous laparoscopic camera control must maintain a stable and safe surgical view under rapid tool-tissue interactions while remaining interpretable to surgeons. We present a strategy-grounded framework that couples high-level vision-language inference with low-level closed-loop control. Offline, raw surgical videos are parsed into camera-relevant temporal events (e.g., interaction, working-distance deviation, and view-quality degradation) and structured as attributed event graphs. Mining these graphs yields a compact set of reusable camera-handling strategy primitives, which provide structured supervision for learning. Online, a fine-tuned Vision-Language Model (VLM) processes the live laparoscopic view to predict the dominant strategy and discrete image-based motion commands, executed by an IBVS-RCM controller under strict safety constraints; optional speech input enables intuitive human-in-the-loop conditioning. On a surgeon-annotated dataset, event parsing achieves reliable temporal localization (F1-score 0.86), and the mined strategies show strong semantic alignment with expert interpretation (cluster purity 0.81). Extensive ex vivo experiments on silicone phantoms and porcine tissues demonstrate that the proposed system outperforms junior surgeons in standardized camera-handling evaluations, reducing field-of-view centering error by 35.26% and image shaking by 62.33%, while preserving smooth motion and stable working-distance regulation.