CARES: Collaborative Agentic Reasoning for Error Detection in Surgery

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of multi-class error detection in robotic-assisted surgery (RAS) under severe annotation scarcity, this paper introduces MERP—the first fine-grained error dataset for radical prostatectomy—and proposes CARES, a zero-shot, clinically informed multi-agent reasoning framework. Its core contributions are: (1) a risk-aware routing mechanism that dynamically assigns error types to expertise-matched reasoning pathways; (2) decomposition of surgical analysis into three specialized agents—spatiotemporal modeling, procedural compliance, and clinical risk assessment—that jointly generate interpretable medical reasoning chains; and (3) integration of clinical-guideline-driven zero-shot prompting, error-specific chain-of-thought reasoning, and multi-tiered risk stratification. On the RARP and MERP benchmarks, CARES achieves mF1 scores of 54.3 and 52.0, respectively—up to 14% higher than prior zero-shot methods—while remaining competitive with fully supervised models.

📝 Abstract
Robotic-assisted surgery (RAS) introduces complex challenges that current surgical error detection methods struggle to address effectively due to limited training data and methodological constraints. Therefore, we construct MERP (Multi-class Error in Robotic Prostatectomy), a comprehensive dataset for error detection in robotic prostatectomy with frame-level annotations featuring six clinically aligned error categories. In addition, we propose CARES (Collaborative Agentic Reasoning for Error Detection in Surgery), a novel zero-shot, clinically informed, and risk-stratified agentic reasoning architecture for multi-class surgical error detection. CARES implements adaptive generation of medically informed, error-specific Chain-of-Thought (CoT) prompts across multiple expertise levels. The framework employs risk-aware routing to assign each error task to an expertise-matched reasoning pathway based on its complexity and clinical impact. Each pathway then decomposes surgical error analysis into three specialized agents covering temporal, spatial, and procedural analysis. Each agent performs its analysis using dynamically selected prompts tailored to the assigned expertise level and error type, generating detailed and transparent reasoning traces. By incorporating clinically informed reasoning from established surgical assessment guidelines, CARES enables zero-shot surgical error detection without prior training. Evaluation demonstrates superior performance with 54.3 mF1 on RARP and 52.0 mF1 on MERP, outperforming existing zero-shot approaches by up to 14% while remaining competitive with trained models. Ablation studies demonstrate the effectiveness of our method. The dataset and code will be publicly available.
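The routing-then-decomposition pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the error-type names, risk tiers, and expertise labels below are all invented for the example, since the paper does not enumerate them here; only the overall shape (risk-aware routing to an expertise level, then three agents with temporal/spatial/procedural roles) follows the abstract.

```python
# Hypothetical sketch of CARES-style risk-aware routing.
# All concrete names (error types, tiers, expertise levels) are assumptions.

RISK_TIERS = {
    "bleeding": "high",
    "tissue_damage": "high",
    "wrong_plane": "medium",
    "instrument_idle": "low",
}

EXPERTISE_BY_TIER = {"high": "attending", "medium": "fellow", "low": "resident"}


def route(error_type: str) -> str:
    """Risk-aware routing: map an error type to an expertise-matched pathway."""
    tier = RISK_TIERS.get(error_type, "medium")  # default tier for unseen errors
    return EXPERTISE_BY_TIER[tier]


def run_agents(error_type: str, expertise: str) -> list[str]:
    """Decompose the analysis into three specialized agents, each using a
    prompt tailored to the error type and the routed expertise level."""
    roles = ["temporal", "spatial", "procedural"]
    return [f"[{expertise}/{role}] analyze '{error_type}'" for role in roles]


def detect(error_type: str) -> list[str]:
    return run_agents(error_type, route(error_type))


print(detect("bleeding"))
```

In the actual framework each agent would call a language model with its dynamically selected CoT prompt and return a reasoning trace; here the strings stand in for those traces.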
Problem

Research questions and friction points this paper is trying to address.

Detecting surgical errors in robotic prostatectomy with limited data
Proposing zero-shot error detection using clinically-informed reasoning
Addressing error complexity via risk-stratified multi-agent analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot agentic reasoning for surgical errors
Risk-aware routing to expertise-matched pathways
Dynamic CoT prompts for medical error analysis
Authors

Chang Han Low
National University of Singapore
Surgical AI, Medical Imaging, Multi-Agent System, Multi-Modal

Zhu Zhuo
National University of Singapore
Surgical Data Science, Multimodal Large Language Model

Ziyue Wang
National University of Singapore (NUS), Singapore

Jialang Xu
University College London
Surgical robotics vision, Deep learning, Computer vision, Wireless communication

Haofeng Liu
National University of Singapore
Image Reconstruction, Deep Learning

Nazir Sirajudeen
University College London (UCL), UK

Matthew Boal
Gloucestershire Hospitals NHS Foundation Trust, UK

Philip J. Edwards
University College London (UCL), UK

Danail Stoyanov
Professor of Robot Vision, University College London
Surgical Vision, Surgical AI, Surgical Robotics, Computer Assisted Interventions, Surgical Data Science

Nader Francis
The Griffin Institute, UK

Jiehui Zhong
The First Affiliated Hospital of Guangzhou Medical University, China

Di Gu
The First Affiliated Hospital of Guangzhou Medical University, China

Evangelos B. Mazomenos
University College London (UCL), UK

Yueming Jin
Assistant Professor, National University of Singapore
Medical Image Analysis, Surgical AI & Robotics, Multimodal Learning