🤖 AI Summary
Existing approaches to surgical video question answering predominantly rely on static frame analysis, struggling to capture the complex temporal semantics, low visual contrast, and multi-level cognitive demands inherent in laparoscopic cholecystectomy. To address these challenges, this work proposes a multimodal large language model framework featuring a query-guided visual token selection mechanism that constructs spatiotemporal memory banks, along with a Surgical Competency Progression (SCP) training paradigm that systematically integrates a three-tier task hierarchy spanning perception to reasoning. Evaluated on the newly curated large-scale dataset CholeVidQA-32K, which comprises 11 distinct tasks, the proposed method achieves state-of-the-art performance, significantly outperforming existing open-source multimodal and video foundation models under both zero-shot and fine-tuned settings.
📝 Abstract
Surgical procedures are inherently complex and risky, requiring extensive expertise and constant focus to navigate evolving intraoperative scenes effectively. Computer-assisted systems such as surgical visual question answering (VQA) offer promise for education and intraoperative support. Current surgical VQA research largely focuses on static frame analysis, overlooking rich temporal semantics. Surgical video question answering is further challenged by low visual contrast, its highly knowledge-driven nature, diverse analytical needs spanning scattered temporal windows, and the hierarchy from basic perception to high-level intraoperative assessment. To address these challenges, we propose SurgTEMP, a multimodal LLM framework featuring (i) a query-guided token selection module that builds hierarchical visual memory (spatial and temporal memory banks) and (ii) a Surgical Competency Progression (SCP) training scheme. Together, these components enable effective modeling of variable-length surgical videos while preserving procedure-relevant cues and temporal coherence, and better support diverse downstream assessment tasks. To support model development, we introduce CholeVidQA-32K, a surgical video question answering dataset comprising 32K open-ended QA pairs and 3,855 video segments (approximately 128 h total) from laparoscopic cholecystectomy. The dataset is organized into a three-level hierarchy -- Perception, Assessment, and Reasoning -- spanning 11 tasks from instrument/action/anatomy perception to Critical View of Safety (CVS), intraoperative difficulty, skill proficiency, and adverse event assessment. In comprehensive evaluations against state-of-the-art open-source multimodal and video LLMs (fine-tuned and zero-shot), SurgTEMP achieves substantial performance improvements, advancing the state of video-based surgical VQA.
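The abstract does not specify how the query-guided token selection module scores or retains visual tokens. Purely as an illustration of the general idea, the sketch below scores every visual token against a query embedding by cosine similarity, keeps the top-k tokens per frame (a "spatial bank"), and then keeps the best-scoring frames (a "temporal bank"). All names (`select_tokens`, `k_spatial`, `k_frames`) and the scoring rule are assumptions, not the paper's actual method.

```python
import numpy as np

def select_tokens(frame_tokens, query, k_spatial=8, k_frames=4):
    """Illustrative query-guided token selection (not SurgTEMP's actual design).

    frame_tokens: (T, N, D) array of N visual tokens for each of T frames.
    query:        (D,) query embedding.
    Returns a spatial bank (top-k tokens per frame) and a temporal bank
    (the spatially pruned tokens of the k best-matching frames).
    """
    q = query / np.linalg.norm(query)
    tok = frame_tokens / np.linalg.norm(frame_tokens, axis=-1, keepdims=True)
    # Cosine similarity of every token to the query: shape (T, N)
    scores = tok @ q
    # Spatial bank: keep the k_spatial highest-scoring tokens in each frame
    top_idx = np.argsort(scores, axis=1)[:, -k_spatial:]
    spatial_bank = np.take_along_axis(frame_tokens, top_idx[..., None], axis=1)
    # Temporal bank: rank frames by their single best token score,
    # keep the top k_frames in original temporal order
    frame_scores = scores.max(axis=1)
    keep = np.sort(np.argsort(frame_scores)[-k_frames:])
    temporal_bank = spatial_bank[keep]
    return spatial_bank, temporal_bank

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 64, 32))  # 16 frames, 64 tokens each, dim 32
query = rng.normal(size=32)
spatial, temporal = select_tokens(tokens, query)
print(spatial.shape, temporal.shape)    # (16, 8, 32) (4, 8, 32)
```

The point of such a scheme is that a fixed-size token budget can cover variable-length videos: spatial pruning bounds per-frame cost, and temporal pruning keeps only query-relevant windows while preserving their original order.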