ProstaTD: A Large-scale Multi-source Dataset for Structured Surgical Triplet Detection

📅 2025-06-01
🤖 AI Summary
Existing surgical triplet detection datasets suffer from coarse spatial annotations, temporally ungrounded labels lacking clinical justification, and poor generalizability due to single-center collection. To address these limitations, we introduce ProstaTD—the first large-scale, multi-center surgical triplet dataset specifically designed for robot-assisted prostatectomy—comprising 21 cross-institutional procedures, 60,529 video frames, and 165,567 high-fidelity spatiotemporal triplet annotations. ProstaTD pioneers clinically grounded temporal boundaries (defined by expert surgeons) and pixel-accurate spatial bounding boxes, established via a surgeon-led, dual-track iterative annotation protocol. As the largest, most diverse, and clinically validated surgical triplet benchmark to date, ProstaTD substantially enhances model generalizability across institutions and surgical workflows. It serves as a foundational resource for developing trustworthy surgical AI systems and standardizing procedural training.

📝 Abstract
Surgical triplet detection has emerged as a pivotal task in surgical video analysis, with significant implications for performance assessment and the training of novice surgeons. However, existing datasets such as CholecT50 exhibit critical limitations: they lack precise spatial bounding box annotations, provide inconsistent and clinically ungrounded temporal labels, and rely on a single data source, which limits model generalizability. To address these shortcomings, we introduce ProstaTD, a large-scale, multi-institutional dataset for surgical triplet detection, developed from the technically demanding domain of robot-assisted prostatectomy. ProstaTD offers clinically defined temporal boundaries and high-precision bounding box annotations for each structured triplet action. The dataset comprises 60,529 video frames and 165,567 annotated triplet instances, collected from 21 surgeries performed across multiple institutions, reflecting a broad range of surgical practices and intraoperative conditions. The annotation process was conducted under rigorous medical supervision and involved more than 50 contributors, including practicing surgeons and medically trained annotators, through multiple iterative phases of labeling and verification. ProstaTD is the largest and most diverse surgical triplet dataset to date, providing a robust foundation for fair benchmarking, the development of reliable surgical AI systems, and scalable tools for procedural training.
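To make the task concrete: each annotation pairs a structured `<instrument, verb, target>` triplet with a per-frame bounding box, and detectors are scored on both the triplet class and box overlap. The sketch below illustrates this with a minimal, hypothetical annotation record and an IoU check; the field names and schema are assumptions for illustration, not the dataset's actual release format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TripletAnnotation:
    """One spatiotemporal triplet instance: <instrument, verb, target>
    plus a bounding box on a single frame (hypothetical schema)."""
    frame_id: int
    instrument: str
    verb: str
    target: str
    bbox: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

# Hypothetical example instance; labels are illustrative only.
ann = TripletAnnotation(
    frame_id=1024,
    instrument="monopolar_scissors",
    verb="dissect",
    target="prostate",
    bbox=(312.0, 198.0, 455.0, 290.0),
)

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# A predicted box matches the ground truth if the triplet class agrees
# and the IoU clears a threshold (0.5 is the conventional cutoff).
pred_box = (300.0, 200.0, 450.0, 300.0)
print(iou(ann.bbox, pred_box) >= 0.5)  # → True
```

This kind of joint criterion (correct triplet class and sufficient box overlap) is what distinguishes triplet *detection* from the frame-level triplet *recognition* evaluated on datasets like CholecT50.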
Problem

Research questions and friction points this paper is trying to address.

Lack of precise spatial bounding box annotations in existing datasets
Inconsistent and clinically ungrounded temporal labels in current datasets
Limited model generalizability due to reliance on a single data source
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale multi-institutional surgical dataset
Clinically defined temporal and spatial annotations
Diverse data from 21 surgeries for robustness
Yiliang Chen
School of Nursing, The Hong Kong Polytechnic University
Zhixi Li
Nanfang Hospital, Southern Medical University
Cheng Xu
School of Nursing, The Hong Kong Polytechnic University
Alex Qinyang Liu
Prince of Wales Hospital / Chinese University of Hong Kong
Xuemiao Xu
South China University of Technology
J. Teoh
Department of Surgery, The Chinese University of Hong Kong
Shengfeng He
Singapore Management University
Visual Computing, Generative Models, Computer Vision, Computational Photography, Computer Graphics
Jing Qin
University of Southern Denmark
Mathematics, Statistics