🤖 AI Summary
Current intelligent surgical systems lack the capability to universally recognize basic surgical actions (BSAs) across specialties, limiting their application in skill assessment and automated surgical planning. This work addresses the gap by constructing the first large-scale, multi-scenario video dataset, comprising over 11,000 clips that span six surgical specialties and ten BSA categories. We propose the first foundation model for generic BSA recognition, integrating a vision–language architecture with domain-specific surgical knowledge. The model demonstrates strong generalization across surgical specialties and anatomical sites. Its derived skill assessments prove effective in prostatectomy, and its generated surgical planning narratives are validated by clinical experts from multiple countries in cholecystectomy and nephrectomy, advancing the development of interpretable and generalizable surgical superintelligence.
📝 Abstract
Artificial intelligence, imaging, and large language models have the potential to transform surgical practice, training, and automation. Understanding and modeling basic surgical actions (BSAs), the fundamental units of operation in any surgery, is essential to drive the evolution of this field. In this paper, we present a BSA dataset comprising 10 basic actions across 6 surgical specialties with over 11,000 video clips, the largest to date. Building on this dataset, we developed a new foundation model that performs general-purpose recognition of basic surgical actions. Our approach demonstrates robust cross-specialty performance in experiments on datasets spanning different procedure types and anatomical sites. Furthermore, we demonstrate downstream applications enabled by the BSA foundation model: surgical skill assessment in prostatectomy using domain-specific knowledge, and action planning in cholecystectomy and nephrectomy using large vision-language models. Evaluation by multinational surgeons of the language model's explanatory action-planning texts confirmed their clinical relevance. These findings indicate that basic surgical actions can be robustly recognized across scenarios, and that an accurate BSA understanding model can substantially facilitate complex applications and accelerate the realization of surgical superintelligence.