Surgical-MambaLLM: Mamba2-enhanced Multimodal Large Language Model for VQLA in Robotic Surgery

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak cross-modal dependency modeling and insufficient spatial structure perception in visual question localized-answering for robotic surgery (Surgical-VQLA), this paper proposes the first multimodal large language model integrating the Mamba2 architecture. Methodologically, the authors design a Cross-modal Bidirectional Mamba2 Integration (CBMI) module to enhance fine-grained text–vision alignment, and introduce a Surgical Instrument Perception (SIP) scanning mode that explicitly models instrument spatial layout and operational relationships. Notably, this work pioneers the application of the state-space model Mamba2 to surgical VQA. Evaluated on the EndoVis17-VQLA and EndoVis18-VQLA benchmarks, the approach significantly outperforms prior state-of-the-art methods, achieving substantial accuracy improvements. These results validate the model's effectiveness and generalizability in comprehending complex surgical scenes.

📝 Abstract
In recent years, Visual Question Localized-Answering in robotic surgery (Surgical-VQLA) has gained significant attention for its potential to assist medical students and junior doctors in understanding surgical scenes. Recently, the rapid development of Large Language Models (LLMs) has provided promising solutions for this task. However, current methods struggle to establish complex dependencies between text and visual details, and have difficulty perceiving the spatial information of surgical scenes. To address these challenges, we propose Surgical-MambaLLM, the first method to combine Mamba2 with an LLM in the surgical domain. It leverages Mamba2's ability to effectively capture cross-modal dependencies and perceive spatial information in surgical scenes, thereby enhancing the LLM's understanding of surgical images. Specifically, we propose the Cross-modal Bidirectional Mamba2 Integration (CBMI) module, which exploits Mamba2's cross-modal integration capabilities for effective multimodal fusion. Additionally, tailored to the geometric characteristics of surgical scenes, we design the Surgical Instrument Perception (SIP) scanning mode, in which Mamba2 scans the surgical images, enhancing the model's spatial understanding of the surgical scene. Extensive experiments demonstrate that Surgical-MambaLLM outperforms state-of-the-art methods on the EndoVis17-VQLA and EndoVis18-VQLA datasets, significantly improving performance on the Surgical-VQLA task.
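The data flow behind the CBMI idea, as described in the abstract, can be sketched in a few lines. This is a toy illustration only: a plain linear recurrence stands in for a real Mamba2 block, and the merge-by-averaging choice is an assumption, not the paper's design. It shows the pattern of concatenating text and vision tokens into one sequence, scanning it in both directions, and fusing the two passes.

```python
import numpy as np

def linear_scan(x, decay=0.9):
    """Sequential recurrence h_t = decay * h_{t-1} + x_t (toy stand-in for an SSM scan)."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = decay * h + x[t]
        out[t] = h
    return out

def bidirectional_fuse(text_tokens, vision_tokens):
    """Scan the joint text+vision sequence forward and backward, then average the passes."""
    joint = np.concatenate([text_tokens, vision_tokens], axis=0)
    fwd = linear_scan(joint)
    bwd = linear_scan(joint[::-1])[::-1]  # backward pass, re-flipped to original order
    return 0.5 * (fwd + bwd)

text = np.random.randn(4, 8)     # 4 text tokens, embedding dim 8
vision = np.random.randn(16, 8)  # 16 image-patch tokens, embedding dim 8
fused = bidirectional_fuse(text, vision)
print(fused.shape)  # (20, 8)
```

Because each fused token has seen the sequence from both directions, every text token carries information from every image patch and vice versa, which is the cross-modal dependency the module is meant to capture.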
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual question localized-answering in robotic surgery
Improving cross-modal dependencies between text and visual details
Better perception of spatial information in surgical scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Mamba2 with an LLM for surgical visual understanding
Uses the Cross-modal Bidirectional Mamba2 Integration (CBMI) module for multimodal fusion
Implements Surgical Instrument Perception (SIP) scanning for spatial awareness
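A scanning mode like SIP amounts to choosing the order in which image patches are fed to the sequential model. The paper's actual ordering is not given in this summary; as a hypothetical stand-in, the sketch below visits patches of an H × W grid from the image centre outward (instruments typically operate near the centre of a surgical view), producing an index permutation that could reorder patch tokens before a scan.

```python
def center_out_order(h, w):
    """Return flat patch indices of an h x w grid, sorted by distance from the centre.

    Hypothetical scan order for illustration; the real SIP ordering is not
    specified in this summary.
    """
    cy, cx = (h - 1) / 2, (w - 1) / 2
    coords = [(r, c) for r in range(h) for c in range(w)]
    coords.sort(key=lambda rc: (rc[0] - cy) ** 2 + (rc[1] - cx) ** 2)
    return [r * w + c for r, c in coords]

order = center_out_order(3, 3)
print(order[0])  # 4 -> the centre patch of a 3x3 grid is visited first
```

Feeding patches in a geometry-aware order lets a recurrent scan accumulate spatial context in a way that matches the scene layout, which is the intuition the SIP mode builds on.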